Feb 24 09:07:56 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 09:07:56 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:07:56 crc restorecon[4691]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 
09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 
09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:56 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:07:57 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:07:57 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 09:07:58 crc kubenswrapper[4822]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.053972 4822 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061835 4822 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061871 4822 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061882 4822 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061893 4822 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061904 4822 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061950 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061962 4822 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061975 4822 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061986 4822 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.061995 4822 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062003 4822 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062011 4822 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062020 4822 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062028 4822 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062036 4822 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062044 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062052 4822 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062060 4822 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 
09:07:58.062068 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062076 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062084 4822 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062093 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062101 4822 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062109 4822 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062116 4822 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062124 4822 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062131 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062140 4822 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062148 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062156 4822 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062164 4822 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062171 4822 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062181 4822 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062191 4822 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062199 4822 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062209 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062218 4822 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062226 4822 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062233 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062242 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062249 4822 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062257 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062265 4822 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062272 4822 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062281 4822 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062288 4822 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062296 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:07:58 crc kubenswrapper[4822]: 
W0224 09:07:58.062304 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062312 4822 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062319 4822 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062328 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062338 4822 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062351 4822 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062362 4822 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062370 4822 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062378 4822 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062387 4822 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062396 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062403 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062411 4822 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062419 4822 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:07:58 crc 
kubenswrapper[4822]: W0224 09:07:58.062428 4822 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062438 4822 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062447 4822 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062457 4822 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062466 4822 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062474 4822 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062484 4822 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062533 4822 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062541 4822 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.062549 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063486 4822 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063514 4822 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063528 4822 flags.go:64] FLAG: --anonymous-auth="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063540 4822 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063552 4822 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 09:07:58 crc 
kubenswrapper[4822]: I0224 09:07:58.063561 4822 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063618 4822 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063630 4822 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063640 4822 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063649 4822 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063659 4822 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063669 4822 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063678 4822 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063687 4822 flags.go:64] FLAG: --cgroup-root="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063697 4822 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063706 4822 flags.go:64] FLAG: --client-ca-file="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063715 4822 flags.go:64] FLAG: --cloud-config="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063723 4822 flags.go:64] FLAG: --cloud-provider="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063732 4822 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063744 4822 flags.go:64] FLAG: --cluster-domain="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063753 4822 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063762 4822 flags.go:64] FLAG: 
--config-dir="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063771 4822 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063781 4822 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063791 4822 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063801 4822 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063810 4822 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063819 4822 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063828 4822 flags.go:64] FLAG: --contention-profiling="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063838 4822 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063848 4822 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063858 4822 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063867 4822 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063877 4822 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063886 4822 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063895 4822 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063904 4822 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063947 4822 flags.go:64] FLAG: --enable-server="true" Feb 24 09:07:58 crc 
kubenswrapper[4822]: I0224 09:07:58.063959 4822 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063978 4822 flags.go:64] FLAG: --event-burst="100" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.063990 4822 flags.go:64] FLAG: --event-qps="50" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064001 4822 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064010 4822 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064020 4822 flags.go:64] FLAG: --eviction-hard="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064031 4822 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064040 4822 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064050 4822 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064061 4822 flags.go:64] FLAG: --eviction-soft="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064070 4822 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064079 4822 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064090 4822 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064099 4822 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064108 4822 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064117 4822 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064127 4822 flags.go:64] FLAG: --feature-gates="" Feb 24 09:07:58 crc 
kubenswrapper[4822]: I0224 09:07:58.064137 4822 flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064146 4822 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064156 4822 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064165 4822 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064175 4822 flags.go:64] FLAG: --healthz-port="10248" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064184 4822 flags.go:64] FLAG: --help="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064193 4822 flags.go:64] FLAG: --hostname-override="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064203 4822 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064212 4822 flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064222 4822 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064230 4822 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064239 4822 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064248 4822 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064257 4822 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064266 4822 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064275 4822 flags.go:64] FLAG: --kube-api-burst="100" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064284 4822 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064294 4822 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064302 4822 flags.go:64] FLAG: --kube-reserved="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064311 4822 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064320 4822 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064329 4822 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064338 4822 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064348 4822 flags.go:64] FLAG: --lock-file="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064356 4822 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064367 4822 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064376 4822 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064391 4822 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064402 4822 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064411 4822 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064420 4822 flags.go:64] FLAG: --logging-format="text" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064430 4822 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064439 4822 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 
09:07:58.064449 4822 flags.go:64] FLAG: --manifest-url="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064457 4822 flags.go:64] FLAG: --manifest-url-header="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064469 4822 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064478 4822 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064489 4822 flags.go:64] FLAG: --max-pods="110" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064498 4822 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064507 4822 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064516 4822 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064525 4822 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064534 4822 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064544 4822 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064553 4822 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064572 4822 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064581 4822 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064591 4822 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064601 4822 flags.go:64] FLAG: --pod-cidr="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064610 4822 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064623 4822 flags.go:64] FLAG: --pod-manifest-path="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064632 4822 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064641 4822 flags.go:64] FLAG: --pods-per-core="0" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064650 4822 flags.go:64] FLAG: --port="10250" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064659 4822 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064668 4822 flags.go:64] FLAG: --provider-id="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064677 4822 flags.go:64] FLAG: --qos-reserved="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064686 4822 flags.go:64] FLAG: --read-only-port="10255" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064696 4822 flags.go:64] FLAG: --register-node="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064705 4822 flags.go:64] FLAG: --register-schedulable="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064714 4822 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064728 4822 flags.go:64] FLAG: --registry-burst="10" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064737 4822 flags.go:64] FLAG: --registry-qps="5" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064746 4822 flags.go:64] FLAG: --reserved-cpus="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064756 4822 flags.go:64] FLAG: --reserved-memory="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064767 4822 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 
09:07:58.064776 4822 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064785 4822 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064794 4822 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064803 4822 flags.go:64] FLAG: --runonce="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064812 4822 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064821 4822 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064830 4822 flags.go:64] FLAG: --seccomp-default="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064839 4822 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064849 4822 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064858 4822 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064867 4822 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064876 4822 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064885 4822 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064895 4822 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064904 4822 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064942 4822 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064952 4822 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 
09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064961 4822 flags.go:64] FLAG: --system-cgroups="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064971 4822 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064986 4822 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.064995 4822 flags.go:64] FLAG: --tls-cert-file="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065004 4822 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065015 4822 flags.go:64] FLAG: --tls-min-version="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065024 4822 flags.go:64] FLAG: --tls-private-key-file="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065033 4822 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065042 4822 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065051 4822 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065060 4822 flags.go:64] FLAG: --v="2" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065072 4822 flags.go:64] FLAG: --version="false" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065083 4822 flags.go:64] FLAG: --vmodule="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065094 4822 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.065103 4822 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065309 4822 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065319 4822 feature_gate.go:330] unrecognized feature gate: 
EtcdBackendQuota Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065331 4822 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065342 4822 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065350 4822 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065360 4822 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065369 4822 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065377 4822 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065385 4822 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065392 4822 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065401 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065408 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065417 4822 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065427 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065434 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065442 4822 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065452 4822 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065463 4822 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065472 4822 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065482 4822 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065492 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065501 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065509 4822 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065518 4822 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065526 4822 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065535 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065543 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065550 4822 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065558 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065566 4822 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 
09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065574 4822 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065581 4822 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065589 4822 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065597 4822 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065604 4822 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065612 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065619 4822 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065627 4822 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065636 4822 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065646 4822 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065654 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065662 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065672 4822 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
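The journal lines above repeat the same `feature_gate.go:330] unrecognized feature gate: ...` warning many times, because the kubelet evaluates its feature-gate set in several passes. A minimal Python sketch (a hypothetical helper, not part of this log or of any kubelet tooling) that deduplicates the gate names from raw journal text:

```python
import re

# Matches the kubelet warning emitted at feature_gate.go:330, e.g.
#   "... feature_gate.go:330] unrecognized feature gate: GatewayAPI"
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def unknown_gates(journal_text: str) -> list[str]:
    """Return the sorted, deduplicated gate names found in journal output."""
    return sorted(set(GATE_RE.findall(journal_text)))

# Sample lines copied from the log above; in practice this would be
# the captured output of `journalctl -u kubelet`.
sample = (
    "W0224 09:07:58.062209 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI\n"
    "W0224 09:07:58.065535 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI\n"
    "W0224 09:07:58.062011 4822 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities\n"
)
print(unknown_gates(sample))  # ['AdditionalRoutingCapabilities', 'GatewayAPI']
```

These warnings are harmless in themselves: they indicate gate names (typically OpenShift-level gates) that this kubelet binary does not recognize, and deduplicating them makes the much shorter list of distinct names easy to review.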
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065681 4822 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065690 4822 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065698 4822 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065707 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065715 4822 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065722 4822 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065730 4822 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065738 4822 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065745 4822 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065753 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065761 4822 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065770 4822 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065777 4822 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065785 4822 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065793 4822 feature_gate.go:330] 
unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065801 4822 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065809 4822 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065817 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065824 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065832 4822 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065839 4822 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065847 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065854 4822 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065862 4822 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065869 4822 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065877 4822 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065885 4822 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.065893 4822 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.066985 4822 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.081388 4822 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.081450 4822 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081581 4822 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081596 4822 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081606 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081616 4822 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081625 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081633 4822 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081641 4822 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081649 4822 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081657 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081665 4822 feature_gate.go:330] 
unrecognized feature gate: OVNObservability Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081673 4822 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081680 4822 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081688 4822 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081697 4822 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081705 4822 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081713 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081722 4822 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081730 4822 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081738 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081749 4822 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081762 4822 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081770 4822 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081778 4822 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081787 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081794 4822 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081802 4822 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081810 4822 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081818 4822 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081826 4822 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081833 4822 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081841 4822 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081849 4822 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081857 4822 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081864 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:07:58 crc kubenswrapper[4822]: 
W0224 09:07:58.081877 4822 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081887 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081896 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081904 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081940 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081948 4822 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081957 4822 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081965 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081973 4822 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081982 4822 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081990 4822 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.081998 4822 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082005 4822 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082013 4822 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 
09:07:58.082021 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082030 4822 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082038 4822 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082045 4822 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082053 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082061 4822 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082071 4822 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082080 4822 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082090 4822 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082101 4822 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082111 4822 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082120 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082128 4822 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082136 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082144 4822 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082151 4822 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082159 4822 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082168 4822 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082176 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082183 4822 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082191 4822 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082199 4822 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082209 4822 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.082223 4822 
feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082449 4822 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082460 4822 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082469 4822 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082478 4822 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082489 4822 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082527 4822 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082536 4822 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082544 4822 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082554 4822 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082562 4822 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082570 4822 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082578 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082585 4822 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082594 4822 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082602 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082610 4822 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082618 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082625 4822 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082633 4822 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082641 4822 feature_gate.go:330] unrecognized feature gate: 
SignatureStores Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082648 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082656 4822 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082663 4822 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082671 4822 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082679 4822 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082689 4822 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082699 4822 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082707 4822 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082715 4822 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082723 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082731 4822 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082739 4822 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082746 4822 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082754 4822 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082763 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082771 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082779 4822 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082786 4822 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082793 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082801 4822 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082809 4822 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082817 4822 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082825 4822 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082836 4822 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082845 4822 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082856 4822 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082866 4822 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082875 4822 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082884 4822 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082892 4822 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082900 4822 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082907 4822 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082938 4822 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082946 4822 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082954 4822 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082962 4822 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082970 4822 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082977 4822 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082985 4822 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.082993 4822 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083001 4822 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083009 4822 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083017 4822 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083024 4822 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083032 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083040 4822 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083048 4822 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083056 4822 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083063 4822 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083071 4822 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.083079 4822 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.083093 4822 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 09:07:58 crc 
kubenswrapper[4822]: I0224 09:07:58.083374 4822 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.091962 4822 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.096663 4822 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.096839 4822 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.098851 4822 server.go:997] "Starting client certificate rotation" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.098901 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.099170 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.126764 4822 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.128610 4822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.132466 4822 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.152081 4822 log.go:25] "Validated CRI v1 runtime API" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.186428 4822 log.go:25] "Validated CRI v1 image API" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.190865 4822 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.198575 4822 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-09-02-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.198628 4822 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.231509 4822 manager.go:217] Machine: {Timestamp:2026-02-24 09:07:58.225111301 +0000 UTC m=+0.612873919 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a2c96f03-56a9-40d5-9ba9-563a1da7316d BootID:a5c8732e-3240-474d-97f2-cd9f2c6e22aa Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 
HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4d:50:95 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4d:50:95 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0d:af:99 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3d:9d:98 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5e:a1:b3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:a3:40 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:b0:02:09:40:1f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:b1:2e:1d:8b:0e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 
Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 
Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.231884 4822 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.232115 4822 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.232702 4822 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.233041 4822 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.233099 4822 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.233457 4822 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.233476 4822 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.234332 4822 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.234390 4822 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.235515 4822 state_mem.go:36] "Initialized new in-memory state store" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.235682 4822 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.240276 4822 kubelet.go:418] "Attempting to sync node with API server" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.240502 4822 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.240557 4822 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.240583 4822 kubelet.go:324] "Adding apiserver pod source" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.240607 4822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.245763 4822 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.247125 4822 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.247937 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.248040 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.248002 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.248121 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.257532 4822 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.260303 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.260502 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 
09:07:58.260625 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.260727 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.260856 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.260998 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261104 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261213 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261318 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261419 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261521 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.261638 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.263677 4822 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.264570 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.264794 4822 server.go:1280] "Started kubelet" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.265962 4822 
server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.265966 4822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 09:07:58 crc systemd[1]: Started Kubernetes Kubelet. Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.266870 4822 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.269196 4822 server.go:460] "Adding debug handlers to kubelet server" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.269890 4822 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.269988 4822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.270100 4822 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.270136 4822 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.270243 4822 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.270267 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.270687 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.270726 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.272444 4822 factory.go:55] Registering systemd factory Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.272511 4822 factory.go:221] Registration of the systemd container factory successfully Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.272697 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.270622 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897238fa5ec11ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,LastTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.277808 4822 factory.go:153] Registering CRI-O factory Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.278652 4822 factory.go:221] Registration of the crio container factory successfully Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.278833 4822 factory.go:219] Registration of the 
containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.278885 4822 factory.go:103] Registering Raw factory Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.278986 4822 manager.go:1196] Started watching for new ooms in manager Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.280287 4822 manager.go:319] Starting recovery of all containers Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291075 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291177 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291208 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291233 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291257 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291296 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291321 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291348 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291376 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291400 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291423 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291447 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291473 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291503 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291531 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291556 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291581 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291606 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291629 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291655 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291682 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291764 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291794 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291861 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291890 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.291954 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.292563 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.292627 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.292694 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.292999 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293036 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293056 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293078 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293098 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293118 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" 
seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293137 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293156 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293175 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293194 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293214 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293269 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 
09:07:58.293292 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293312 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293368 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293388 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293409 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293428 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293447 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293467 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293487 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293507 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293527 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293559 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293582 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293604 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293625 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293644 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293667 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293687 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293709 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" 
seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293728 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293751 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293773 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293792 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293812 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293831 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293852 4822 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293872 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293892 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293938 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293959 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293977 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.293996 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294016 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294034 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294054 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294074 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294107 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294127 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294146 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294164 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294217 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294239 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294259 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294277 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294303 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294323 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294342 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294360 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294379 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294398 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294418 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.294438 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296412 4822 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296454 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296477 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296508 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296526 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296547 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296567 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296586 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296607 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296636 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296654 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296674 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296703 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296727 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296750 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296772 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" 
seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296794 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296815 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296833 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296854 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296873 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296892 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296963 4822 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.296982 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297003 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297022 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297040 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297059 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297081 4822 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297100 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297119 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297148 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297170 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297190 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297208 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297228 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297246 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297267 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297286 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297306 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297326 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297346 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297365 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297384 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297404 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297427 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297446 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297465 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297484 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297503 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297521 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297541 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297560 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 24 09:07:58 
crc kubenswrapper[4822]: I0224 09:07:58.297580 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297598 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297618 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297640 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297659 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297678 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297698 4822 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297728 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297750 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297770 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297787 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297808 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297828 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297847 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297869 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297887 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297935 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297954 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297972 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.297991 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298010 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298028 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298045 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298063 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298085 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298112 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298131 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298159 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298185 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298202 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298228 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298295 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298315 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298340 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298362 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298379 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298397 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298415 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298432 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298453 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298571 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298590 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298610 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298629 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298648 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298666 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298684 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298702 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298733 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298751 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298771 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298789 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298807 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298825 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298843 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298862 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298884 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298902 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298943 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298962 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298979 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.298999 4822 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.299017 4822 reconstruct.go:97] "Volume reconstruction finished"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.299029 4822 reconciler.go:26] "Reconciler: start to sync state"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.320256 4822 manager.go:324] Recovery completed
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.332772 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.333362 4822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.334382 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.334437 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.334454 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.335833 4822 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.335856 4822 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.335944 4822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.335994 4822 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.336031 4822 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.336169 4822 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.336173 4822 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 24 09:07:58 crc kubenswrapper[4822]: W0224 09:07:58.338369 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.338489 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.360239 4822 policy_none.go:49] "None policy: Start"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.361518 4822 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.361729 4822 state_mem.go:35] "Initializing new in-memory state store"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.371333 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.437248 4822 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.459361 4822 manager.go:334] "Starting Device Plugin manager"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.460314 4822 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.460357 4822 server.go:79] "Starting device plugin registration server"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.461089 4822 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.461192 4822 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.461673 4822 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.461840 4822 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.461863 4822 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.474286 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.477996 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.562459 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.564419 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.564506 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.564533 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.564576 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.565370 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.638695 4822 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.638906 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.640467 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.640520 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.640532 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.640727 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641117 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641190 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641779 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641802 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641814 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.641905 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642224 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642321 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642412 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642435 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642447 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642716 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642765 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642782 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.642968 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.643092 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.643183 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.643575 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.643605 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.643617 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644338 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644361 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644371 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644424 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644468 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644492 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644706 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644868 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.644947 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.645973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.645995 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646006 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646086 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646114 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646130 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646331 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.646364 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.647519 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.647570 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.647592 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.718882 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.718928 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.718950 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.718964 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.718981 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719025 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719062 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719092 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719124 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719187 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719425 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719449 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719534 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719591 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.719645 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.766327 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.768192 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.768277 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.768297 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.768345 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.769316 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821089 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821185 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821243 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821289 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821322 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821359 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821391 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821421 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821418 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821455 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821472 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821490 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\")
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821496 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821546 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821510 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821561 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821431 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821604 4822 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821571 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821521 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821520 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821434 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821702 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 
crc kubenswrapper[4822]: I0224 09:07:58.821749 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821809 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821864 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821858 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821980 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.821990 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.822034 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:07:58 crc kubenswrapper[4822]: E0224 09:07:58.875978 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 24 09:07:58 crc kubenswrapper[4822]: I0224 09:07:58.979282 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.007470 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.020332 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.030122 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-51616e6b1bb858a7108f5308dccb18e6e795ec2ec2294560d528ac89503eff82 WatchSource:0}: Error finding container 51616e6b1bb858a7108f5308dccb18e6e795ec2ec2294560d528ac89503eff82: Status 404 returned error can't find the container with id 51616e6b1bb858a7108f5308dccb18e6e795ec2ec2294560d528ac89503eff82 Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.038963 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.045845 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.051434 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-74ec6e316da95e690cab3b861ae2fc461efffaaab252ea7eb382e79e6e165a3e WatchSource:0}: Error finding container 74ec6e316da95e690cab3b861ae2fc461efffaaab252ea7eb382e79e6e165a3e: Status 404 returned error can't find the container with id 74ec6e316da95e690cab3b861ae2fc461efffaaab252ea7eb382e79e6e165a3e Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.055668 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f10180321d489f62db96a9d5b402f05da3e5431af5c4e352622d55b9dff25f77 WatchSource:0}: Error finding container f10180321d489f62db96a9d5b402f05da3e5431af5c4e352622d55b9dff25f77: Status 404 returned 
error can't find the container with id f10180321d489f62db96a9d5b402f05da3e5431af5c4e352622d55b9dff25f77 Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.063807 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-136e46e0a9c67a97ccaf2144cdb563998e7f5735e289922911dab808efe5d0d1 WatchSource:0}: Error finding container 136e46e0a9c67a97ccaf2144cdb563998e7f5735e289922911dab808efe5d0d1: Status 404 returned error can't find the container with id 136e46e0a9c67a97ccaf2144cdb563998e7f5735e289922911dab808efe5d0d1 Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.080265 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f62c36c71b8dd0b2cff5c90597ace4add302009521c6fd8be52e0054072692ec WatchSource:0}: Error finding container f62c36c71b8dd0b2cff5c90597ace4add302009521c6fd8be52e0054072692ec: Status 404 returned error can't find the container with id f62c36c71b8dd0b2cff5c90597ace4add302009521c6fd8be52e0054072692ec Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.145498 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.145631 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.170193 4822 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.171821 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.171871 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.171887 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.171942 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.172475 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.189235 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.189311 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.266373 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.164:6443: connect: connection refused Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.347410 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51616e6b1bb858a7108f5308dccb18e6e795ec2ec2294560d528ac89503eff82"} Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.349411 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f62c36c71b8dd0b2cff5c90597ace4add302009521c6fd8be52e0054072692ec"} Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.351267 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"136e46e0a9c67a97ccaf2144cdb563998e7f5735e289922911dab808efe5d0d1"} Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.352735 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f10180321d489f62db96a9d5b402f05da3e5431af5c4e352622d55b9dff25f77"} Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.354315 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74ec6e316da95e690cab3b861ae2fc461efffaaab252ea7eb382e79e6e165a3e"} Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.565308 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:59 crc 
kubenswrapper[4822]: E0224 09:07:59.565429 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.677810 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 24 09:07:59 crc kubenswrapper[4822]: W0224 09:07:59.678259 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.678330 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.972793 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.975361 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.975430 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 
09:07:59.975448 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:07:59 crc kubenswrapper[4822]: I0224 09:07:59.975485 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:07:59 crc kubenswrapper[4822]: E0224 09:07:59.976148 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.242934 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:08:00 crc kubenswrapper[4822]: E0224 09:08:00.244018 4822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.266009 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.359858 4822 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d" exitCode=0 Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.359973 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d"} Feb 
24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.360096 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.361537 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.361586 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.361606 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.362577 4822 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1" exitCode=0 Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.362698 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.363172 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.364396 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.364453 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.364472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 
09:08:00.367503 4822 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462" exitCode=0 Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.367546 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.367547 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.368310 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.368334 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.368345 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.372301 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.372354 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.372376 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.375898 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c" exitCode=0 Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.376016 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c"} Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.376173 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.378258 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.378319 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.378340 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.384599 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.385589 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:00 crc kubenswrapper[4822]: I0224 09:08:00.385646 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:00 crc 
kubenswrapper[4822]: I0224 09:08:00.385666 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.266011 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:08:01 crc kubenswrapper[4822]: E0224 09:08:01.278894 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.382316 4822 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0" exitCode=0 Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.382473 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.382874 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.383315 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.383345 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.383353 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.386492 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.386615 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.387885 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.387942 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.387954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.394039 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.394084 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.394095 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.394094 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.395755 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.395782 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.395789 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.400383 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.400462 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.401443 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.401465 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.401474 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.410647 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.410705 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.410732 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.410749 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99"} Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.518599 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.576967 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.578092 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.578134 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.578146 4822 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 09:08:01 crc kubenswrapper[4822]: I0224 09:08:01.578171 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:01 crc kubenswrapper[4822]: E0224 09:08:01.578591 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 24 09:08:01 crc kubenswrapper[4822]: W0224 09:08:01.678510 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:08:01 crc kubenswrapper[4822]: E0224 09:08:01.678604 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:08:01 crc kubenswrapper[4822]: W0224 09:08:01.828748 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 24 09:08:01 crc kubenswrapper[4822]: E0224 09:08:01.828847 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.426157 4822 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80" exitCode=0 Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.426259 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80"} Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.426332 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.428101 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.428157 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.428174 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432158 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a90cb1a3a9ba11ffda22d401b9cebdf01a402961c28141dadbaa7d953a1f0917"} Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432260 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432278 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432311 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432370 4822 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.432398 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.435343 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.435380 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.435397 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.436291 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.436320 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.436335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.437197 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.437230 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.437250 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.438146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.438175 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.438192 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:02 crc kubenswrapper[4822]: I0224 09:08:02.806388 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.262368 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.272433 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439647 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439676 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439664 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1"} Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439757 4822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439864 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439753 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886"} Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.439991 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf"} Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441258 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441301 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441353 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441379 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441314 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441498 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441640 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441696 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:03 crc kubenswrapper[4822]: I0224 09:08:03.441718 4822 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.370272 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447006 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b"} Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447077 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0"} Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447111 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447164 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447965 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.447991 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.448002 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.448157 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.448184 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.448193 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.519425 4822 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.519558 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.779651 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.781757 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.781811 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.781828 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:04 crc kubenswrapper[4822]: I0224 09:08:04.781874 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.396869 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.397150 4822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.397234 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.398895 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.398964 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.398978 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.449884 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.451073 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.451127 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.451147 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.712691 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.712970 4822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.713039 4822 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.714581 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.714651 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:05 crc kubenswrapper[4822]: I0224 09:08:05.714675 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.005905 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.234576 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.453591 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.453643 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455214 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455218 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455265 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455288 4822 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.455231 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.659957 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.660189 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.661651 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.661701 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:06 crc kubenswrapper[4822]: I0224 09:08:06.661717 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:08 crc kubenswrapper[4822]: E0224 09:08:08.478084 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:08:08 crc kubenswrapper[4822]: I0224 09:08:08.930142 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 24 09:08:08 crc kubenswrapper[4822]: I0224 09:08:08.930312 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:08 crc kubenswrapper[4822]: I0224 09:08:08.931900 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:08 crc kubenswrapper[4822]: I0224 09:08:08.931997 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 09:08:08 crc kubenswrapper[4822]: I0224 09:08:08.932019 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:12 crc kubenswrapper[4822]: W0224 09:08:12.021060 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.021203 4822 trace.go:236] Trace[119673817]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 09:08:02.020) (total time: 10001ms): Feb 24 09:08:12 crc kubenswrapper[4822]: Trace[119673817]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (09:08:12.021) Feb 24 09:08:12 crc kubenswrapper[4822]: Trace[119673817]: [10.00109142s] [10.00109142s] END Feb 24 09:08:12 crc kubenswrapper[4822]: E0224 09:08:12.021240 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.266548 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:08:12 crc kubenswrapper[4822]: W0224 09:08:12.287581 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
net/http: TLS handshake timeout Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.287720 4822 trace.go:236] Trace[1342244947]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 09:08:02.286) (total time: 10001ms): Feb 24 09:08:12 crc kubenswrapper[4822]: Trace[1342244947]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:08:12.287) Feb 24 09:08:12 crc kubenswrapper[4822]: Trace[1342244947]: [10.00113183s] [10.00113183s] END Feb 24 09:08:12 crc kubenswrapper[4822]: E0224 09:08:12.287756 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 09:08:12 crc kubenswrapper[4822]: E0224 09:08:12.783222 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1897238fa5ec11ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,LastTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.814343 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.814534 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.815710 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.815747 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:12 crc kubenswrapper[4822]: I0224 09:08:12.815758 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.475375 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.478354 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a90cb1a3a9ba11ffda22d401b9cebdf01a402961c28141dadbaa7d953a1f0917" exitCode=255 Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.478416 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a90cb1a3a9ba11ffda22d401b9cebdf01a402961c28141dadbaa7d953a1f0917"} Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.478639 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.479921 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.479991 4822 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.480009 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.480867 4822 scope.go:117] "RemoveContainer" containerID="a90cb1a3a9ba11ffda22d401b9cebdf01a402961c28141dadbaa7d953a1f0917" Feb 24 09:08:13 crc kubenswrapper[4822]: W0224 09:08:13.666471 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z Feb 24 09:08:13 crc kubenswrapper[4822]: E0224 09:08:13.666549 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.672585 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z Feb 24 09:08:13 crc kubenswrapper[4822]: W0224 09:08:13.685538 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z Feb 24 09:08:13 crc kubenswrapper[4822]: E0224 09:08:13.685649 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.686793 4822 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.686869 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 09:08:13 crc kubenswrapper[4822]: E0224 09:08:13.689439 4822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:08:13 crc 
kubenswrapper[4822]: E0224 09:08:13.690953 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:08:13 crc kubenswrapper[4822]: E0224 09:08:13.692159 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:13Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.692649 4822 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 09:08:13 crc kubenswrapper[4822]: I0224 09:08:13.692753 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.270959 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:14Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.486086 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.488466 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"}
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.488686 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.490072 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.490133 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.490198 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.518881 4822 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:08:14 crc kubenswrapper[4822]: I0224 09:08:14.519125 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.270228 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:15Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.406369 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.495140 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.496221 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.498871 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893" exitCode=255
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.498998 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"}
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.499080 4822 scope.go:117] "RemoveContainer" containerID="a90cb1a3a9ba11ffda22d401b9cebdf01a402961c28141dadbaa7d953a1f0917"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.499094 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.500377 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.500424 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.500442 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.501384 4822 scope.go:117] "RemoveContainer" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"
Feb 24 09:08:15 crc kubenswrapper[4822]: E0224 09:08:15.501775 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.507699 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:08:15 crc kubenswrapper[4822]: I0224 09:08:15.790274 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.006978 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:08:16 crc kubenswrapper[4822]: W0224 09:08:16.213473 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:16Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:16 crc kubenswrapper[4822]: E0224 09:08:16.213570 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.270235 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.270554 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.270816 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:16Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.272322 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.272388 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.272405 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.295389 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.504856 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.508303 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.508316 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510139 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510689 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510767 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.510796 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:16 crc kubenswrapper[4822]: I0224 09:08:16.511812 4822 scope.go:117] "RemoveContainer" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"
Feb 24 09:08:16 crc kubenswrapper[4822]: E0224 09:08:16.512223 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:08:16 crc kubenswrapper[4822]: W0224 09:08:16.583168 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:16Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:16 crc kubenswrapper[4822]: E0224 09:08:16.583307 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.271236 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:17Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.511977 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.513255 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.513321 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.513352 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:17 crc kubenswrapper[4822]: I0224 09:08:17.514450 4822 scope.go:117] "RemoveContainer" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"
Feb 24 09:08:17 crc kubenswrapper[4822]: E0224 09:08:17.514793 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:08:18 crc kubenswrapper[4822]: I0224 09:08:18.270111 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:18Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:18 crc kubenswrapper[4822]: E0224 09:08:18.478808 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 09:08:19 crc kubenswrapper[4822]: I0224 09:08:19.271319 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:19Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.091423 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.093259 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.093340 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.093372 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.093416 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:08:20 crc kubenswrapper[4822]: E0224 09:08:20.096563 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:20Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 24 09:08:20 crc kubenswrapper[4822]: E0224 09:08:20.098971 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:20Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:08:20 crc kubenswrapper[4822]: I0224 09:08:20.270099 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:20Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:20 crc kubenswrapper[4822]: W0224 09:08:20.946338 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:20Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:20 crc kubenswrapper[4822]: E0224 09:08:20.947099 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:21 crc kubenswrapper[4822]: I0224 09:08:21.271088 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:21Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:22 crc kubenswrapper[4822]: I0224 09:08:22.066724 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 09:08:22 crc kubenswrapper[4822]: E0224 09:08:22.075554 4822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:22Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:22 crc kubenswrapper[4822]: I0224 09:08:22.268568 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:22Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:22 crc kubenswrapper[4822]: E0224 09:08:22.791246 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:22Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897238fa5ec11ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,LastTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 09:08:23 crc kubenswrapper[4822]: I0224 09:08:23.271022 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:23Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:24 crc kubenswrapper[4822]: W0224 09:08:24.268597 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:24 crc kubenswrapper[4822]: E0224 09:08:24.268752 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.270831 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.519899 4822 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.520091 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.520182 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.520445 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.522687 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.522795 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.522862 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.523723 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 24 09:08:24 crc kubenswrapper[4822]: I0224 09:08:24.523944 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b" gracePeriod=30
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.271253 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:25Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.539834 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.540627 4822 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b" exitCode=255
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.540683 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b"}
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.540785 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7"}
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.541084 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.542371 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.542429 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:25 crc kubenswrapper[4822]: I0224 09:08:25.542449 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.271598 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:26Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.659974 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.660188 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.661884 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.661983 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:26 crc kubenswrapper[4822]: I0224 09:08:26.662006 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:26 crc kubenswrapper[4822]: W0224 09:08:26.909206 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:26Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:26 crc kubenswrapper[4822]: E0224 09:08:26.909320 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.099656 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.101815 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.101871 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.101885 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.101940 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:08:27 crc kubenswrapper[4822]: E0224 09:08:27.103280 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:27Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 24 09:08:27 crc kubenswrapper[4822]: E0224 09:08:27.107185 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:27Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:08:27 crc kubenswrapper[4822]: W0224 09:08:27.256374 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:27Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:27 crc kubenswrapper[4822]: E0224 09:08:27.256499 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:08:27 crc kubenswrapper[4822]: I0224 09:08:27.270370 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:27Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:28 crc kubenswrapper[4822]: I0224 09:08:28.268597 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:28Z is after 2026-02-23T05:33:13Z
Feb 24 09:08:28 crc kubenswrapper[4822]: E0224 09:08:28.481609 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 09:08:29 crc kubenswrapper[4822]: I0224 09:08:29.269837 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 09:08:30 crc kubenswrapper[4822]: I0224 09:08:30.273635 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.272738 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.519117 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.519275 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.521338 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.521409 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:31 crc kubenswrapper[4822]: I0224 09:08:31.521428 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.274667 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.336955 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.338997 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.339061 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.339080 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:08:32 crc kubenswrapper[4822]: I0224 09:08:32.340131 4822 scope.go:117] "RemoveContainer" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893"
Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.799427 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238fa5ec11ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,LastTimestamp:2026-02-24 09:07:58.264586668 +0000 UTC m=+0.652349256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.806738 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.814853 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.821790 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.827150 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238fb26dd396 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.474417046 +0000 UTC m=+0.862179624,LastTimestamp:2026-02-24 09:07:58.474417046 +0000 UTC m=+0.862179624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.833393 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.564476742 +0000 
UTC m=+0.952239330,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.835449 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.564522893 +0000 UTC m=+0.952285471,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.842617 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.564545933 +0000 UTC m=+0.952308521,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 
09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.849522 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.640498545 +0000 UTC m=+1.028261093,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.856636 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.640527615 +0000 UTC m=+1.028290163,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.865032 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.640537025 +0000 UTC m=+1.028299573,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.873603 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.641796413 +0000 UTC m=+1.029558971,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.880582 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.641808873 +0000 UTC m=+1.029571421,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.887479 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.641819453 +0000 UTC m=+1.029582001,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.893962 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.642424023 +0000 UTC m=+1.030186581,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.900342 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.642441673 +0000 UTC m=+1.030204221,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.907185 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.642453643 +0000 UTC m=+1.030216191,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.913589 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.642754418 +0000 UTC m=+1.030517006,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.920183 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC 
m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.642777048 +0000 UTC m=+1.030539626,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.927210 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.642792368 +0000 UTC m=+1.030554946,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.934786 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.64359549 +0000 UTC m=+1.031358038,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.944008 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.64361308 +0000 UTC m=+1.031375628,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.952110 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa1655e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa1655e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33446551 +0000 UTC m=+0.722228098,LastTimestamp:2026-02-24 09:07:58.64362439 +0000 UTC m=+1.031386948,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.962354 4822 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.1897238faa15b761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa15b761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.334424929 +0000 UTC m=+0.722187517,LastTimestamp:2026-02-24 09:07:58.644353331 +0000 UTC m=+1.032115879,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.971173 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897238faa16125c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897238faa16125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:58.33444822 +0000 UTC m=+0.722210808,LastTimestamp:2026-02-24 09:07:58.644367851 +0000 UTC m=+1.032130399,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.981167 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897238fd4519fc8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.04299412 +0000 UTC m=+1.430756708,LastTimestamp:2026-02-24 09:07:59.04299412 +0000 UTC m=+1.430756708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.990767 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897238fd504bc67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.054732391 +0000 UTC m=+1.442494979,LastTimestamp:2026-02-24 09:07:59.054732391 +0000 UTC m=+1.442494979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 
09:08:32 crc kubenswrapper[4822]: E0224 09:08:32.994815 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897238fd58ae6b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.06352504 +0000 UTC m=+1.451287618,LastTimestamp:2026-02-24 09:07:59.06352504 +0000 UTC m=+1.451287618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.001460 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897238fd5d905bb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.068644795 +0000 UTC m=+1.456407373,LastTimestamp:2026-02-24 09:07:59.068644795 +0000 UTC m=+1.456407373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.007505 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897238fd703a3f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.088215031 +0000 UTC m=+1.475977619,LastTimestamp:2026-02-24 09:07:59.088215031 +0000 UTC m=+1.475977619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.011741 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897238ffa6214c0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.681606848 +0000 UTC m=+2.069369396,LastTimestamp:2026-02-24 09:07:59.681606848 +0000 UTC m=+2.069369396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.015434 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897238ffa637c53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.681698899 +0000 UTC m=+2.069461457,LastTimestamp:2026-02-24 09:07:59.681698899 +0000 UTC m=+2.069461457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.019292 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897238ffa63f683 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.681730179 +0000 UTC m=+2.069492727,LastTimestamp:2026-02-24 09:07:59.681730179 +0000 UTC m=+2.069492727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.022860 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897238ffa649544 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.68177082 +0000 UTC m=+2.069533398,LastTimestamp:2026-02-24 09:07:59.68177082 +0000 UTC m=+2.069533398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.029796 4822 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897238ffad861c2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.68935981 +0000 UTC m=+2.077122398,LastTimestamp:2026-02-24 09:07:59.68935981 +0000 UTC m=+2.077122398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.037415 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897238ffb6dfeb6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.699164854 +0000 UTC m=+2.086927452,LastTimestamp:2026-02-24 09:07:59.699164854 +0000 UTC m=+2.086927452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.043211 4822 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897238ffb7f6ff8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.70030796 +0000 UTC m=+2.088070548,LastTimestamp:2026-02-24 09:07:59.70030796 +0000 UTC m=+2.088070548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.048017 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897238ffb8f7980 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.701358976 +0000 UTC m=+2.089121564,LastTimestamp:2026-02-24 09:07:59.701358976 +0000 UTC m=+2.089121564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.055082 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897238ffb90fda2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.701458338 +0000 UTC m=+2.089220926,LastTimestamp:2026-02-24 09:07:59.701458338 +0000 UTC m=+2.089220926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.062057 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897238ffba0261e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.702451742 +0000 UTC m=+2.090214300,LastTimestamp:2026-02-24 09:07:59.702451742 +0000 UTC m=+2.090214300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.069051 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897238ffbbbee23 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.704272419 +0000 UTC m=+2.092034967,LastTimestamp:2026-02-24 09:07:59.704272419 +0000 UTC m=+2.092034967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.076191 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723900f0d5a4c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.028375628 +0000 UTC m=+2.416138216,LastTimestamp:2026-02-24 09:08:00.028375628 +0000 UTC m=+2.416138216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.081204 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723901204b515 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.078140693 +0000 UTC m=+2.465903271,LastTimestamp:2026-02-24 09:08:00.078140693 +0000 UTC m=+2.465903271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.085414 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723901228c883 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.080504963 +0000 UTC m=+2.468267551,LastTimestamp:2026-02-24 09:08:00.080504963 +0000 UTC m=+2.468267551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.091862 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897239020a4280b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.323471371 +0000 UTC m=+2.711233949,LastTimestamp:2026-02-24 09:08:00.323471371 +0000 UTC m=+2.711233949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.100811 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723902159c23d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.335372861 +0000 UTC m=+2.723135419,LastTimestamp:2026-02-24 09:08:00.335372861 +0000 UTC m=+2.723135419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.108730 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972390217879ae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.337385902 +0000 UTC m=+2.725148490,LastTimestamp:2026-02-24 09:08:00.337385902 +0000 UTC m=+2.725148490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.115342 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723902306fd48 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.36350292 +0000 UTC m=+2.751265478,LastTimestamp:2026-02-24 09:08:00.36350292 +0000 UTC m=+2.751265478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.121054 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18972390232cb2b0 openshift-machine-config-operator 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.365974192 +0000 UTC m=+2.753736750,LastTimestamp:2026-02-24 09:08:00.365974192 +0000 UTC m=+2.753736750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.128510 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723902378e6b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.370968248 +0000 UTC m=+2.758730816,LastTimestamp:2026-02-24 09:08:00.370968248 +0000 UTC m=+2.758730816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc 
kubenswrapper[4822]: E0224 09:08:33.135541 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239024440d9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.384282013 +0000 UTC m=+2.772044571,LastTimestamp:2026-02-24 09:08:00.384282013 +0000 UTC m=+2.772044571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.142485 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972390310e2303 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 
09:08:00.598852355 +0000 UTC m=+2.986614913,LastTimestamp:2026-02-24 09:08:00.598852355 +0000 UTC m=+2.986614913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.148434 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897239031d950d3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.612167891 +0000 UTC m=+2.999930449,LastTimestamp:2026-02-24 09:08:00.612167891 +0000 UTC m=+2.999930449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.153235 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18972390322d2df6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.61766399 +0000 UTC m=+3.005426558,LastTimestamp:2026-02-24 09:08:00.61766399 +0000 UTC m=+3.005426558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.155217 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723903272a52e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.622216494 +0000 UTC m=+3.009979052,LastTimestamp:2026-02-24 09:08:00.622216494 +0000 UTC m=+3.009979052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.158440 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897239032a6d071 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.625635441 +0000 UTC m=+3.013397999,LastTimestamp:2026-02-24 09:08:00.625635441 +0000 UTC m=+3.013397999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.163151 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897239032b303c3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.626435011 +0000 UTC m=+3.014197569,LastTimestamp:2026-02-24 09:08:00.626435011 +0000 UTC m=+3.014197569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.169492 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897239032b8e0bd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.626819261 +0000 UTC m=+3.014581819,LastTimestamp:2026-02-24 09:08:00.626819261 +0000 UTC m=+3.014581819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.175792 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897239032ca2c45 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.627952709 +0000 UTC m=+3.015715267,LastTimestamp:2026-02-24 09:08:00.627952709 +0000 UTC m=+3.015715267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.183834 
4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972390345534ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.653841663 +0000 UTC m=+3.041604211,LastTimestamp:2026-02-24 09:08:00.653841663 +0000 UTC m=+3.041604211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.190811 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239034696a53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.655166035 +0000 UTC m=+3.042928593,LastTimestamp:2026-02-24 09:08:00.655166035 +0000 UTC 
m=+3.042928593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.199119 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897239034db9687 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.662648455 +0000 UTC m=+3.050411023,LastTimestamp:2026-02-24 09:08:00.662648455 +0000 UTC m=+3.050411023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.206957 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897239034ded158 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.66286012 +0000 UTC 
m=+3.050622678,LastTimestamp:2026-02-24 09:08:00.66286012 +0000 UTC m=+3.050622678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.213401 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723903d431b6a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.80365041 +0000 UTC m=+3.191412978,LastTimestamp:2026-02-24 09:08:00.80365041 +0000 UTC m=+3.191412978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.220425 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723903e149269 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.817377897 +0000 UTC m=+3.205140465,LastTimestamp:2026-02-24 09:08:00.817377897 +0000 UTC m=+3.205140465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.224159 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723903e225ef5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.818282229 +0000 UTC m=+3.206044797,LastTimestamp:2026-02-24 09:08:00.818282229 +0000 UTC m=+3.206044797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.230261 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723903e3ac810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.819882 +0000 UTC m=+3.207644568,LastTimestamp:2026-02-24 09:08:00.819882 +0000 UTC m=+3.207644568,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.233554 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723903f2ccb4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.83574254 +0000 UTC m=+3.223505098,LastTimestamp:2026-02-24 09:08:00.83574254 +0000 UTC m=+3.223505098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.237973 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723903f487d97 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.837557655 +0000 UTC m=+3.225320213,LastTimestamp:2026-02-24 09:08:00.837557655 +0000 UTC m=+3.225320213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.242032 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723904cc44d0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.063775501 +0000 UTC m=+3.451538049,LastTimestamp:2026-02-24 09:08:01.063775501 +0000 UTC m=+3.451538049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.245769 4822 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723904cd6bd35 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.064983861 +0000 UTC m=+3.452746449,LastTimestamp:2026-02-24 09:08:01.064983861 +0000 UTC m=+3.452746449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.250171 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723904da4b5cd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.078482381 +0000 UTC m=+3.466244929,LastTimestamp:2026-02-24 09:08:01.078482381 +0000 UTC m=+3.466244929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.253748 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723904dcb3071 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.081004145 +0000 UTC m=+3.468766733,LastTimestamp:2026-02-24 09:08:01.081004145 +0000 UTC m=+3.468766733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.256869 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723904dddec31 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.082231857 +0000 UTC m=+3.469994445,LastTimestamp:2026-02-24 09:08:01.082231857 +0000 UTC m=+3.469994445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.260602 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972390572c2bf5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.238354933 +0000 UTC m=+3.626117481,LastTimestamp:2026-02-24 09:08:01.238354933 +0000 UTC m=+3.626117481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.264265 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239057d83f6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.249632108 +0000 UTC m=+3.637394656,LastTimestamp:2026-02-24 09:08:01.249632108 +0000 UTC m=+3.637394656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.267938 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.268317 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239057e67cde openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.250565342 +0000 UTC m=+3.638327890,LastTimestamp:2026-02-24 09:08:01.250565342 +0000 UTC 
m=+3.638327890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.269712 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723905fe139d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.384438227 +0000 UTC m=+3.772200775,LastTimestamp:2026-02-24 09:08:01.384438227 +0000 UTC m=+3.772200775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.274588 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239064f38874 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.469524084 +0000 UTC m=+3.857286632,LastTimestamp:2026-02-24 09:08:01.469524084 +0000 UTC m=+3.857286632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.278272 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239065cefc9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.483906206 +0000 UTC m=+3.871668754,LastTimestamp:2026-02-24 09:08:01.483906206 +0000 UTC m=+3.871668754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.281982 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723906974478f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.545070479 +0000 UTC m=+3.932833027,LastTimestamp:2026-02-24 09:08:01.545070479 +0000 UTC m=+3.932833027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.286380 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897239069ecfb35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.552980789 +0000 UTC m=+3.940743337,LastTimestamp:2026-02-24 09:08:01.552980789 +0000 UTC m=+3.940743337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.291079 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723909e31d187 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:02.429907335 +0000 UTC m=+4.817669913,LastTimestamp:2026-02-24 09:08:02.429907335 +0000 UTC m=+4.817669913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.295958 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390ae57b226 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:02.700825126 +0000 UTC m=+5.088587704,LastTimestamp:2026-02-24 09:08:02.700825126 +0000 UTC m=+5.088587704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.299434 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390af08a5f7 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:02.712421879 +0000 UTC m=+5.100184457,LastTimestamp:2026-02-24 09:08:02.712421879 +0000 UTC m=+5.100184457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.303686 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390af21b661 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:02.714064481 +0000 UTC m=+5.101827059,LastTimestamp:2026-02-24 09:08:02.714064481 +0000 UTC m=+5.101827059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.307529 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18972390bf39e6c6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:02.98408519 +0000 UTC m=+5.371847768,LastTimestamp:2026-02-24 09:08:02.98408519 +0000 UTC m=+5.371847768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.312274 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390c0749145 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.004707141 +0000 UTC m=+5.392469729,LastTimestamp:2026-02-24 09:08:03.004707141 +0000 UTC m=+5.392469729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.315656 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390c08cff4b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.006308171 +0000 UTC m=+5.394070749,LastTimestamp:2026-02-24 09:08:03.006308171 +0000 UTC m=+5.394070749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.319325 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390d0f0305e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.281244254 +0000 UTC m=+5.669006812,LastTimestamp:2026-02-24 09:08:03.281244254 +0000 UTC m=+5.669006812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.323592 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390d1b9bafe 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.294452478 +0000 UTC m=+5.682215066,LastTimestamp:2026-02-24 09:08:03.294452478 +0000 UTC m=+5.682215066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.326973 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390d1cf2840 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.295856704 +0000 UTC m=+5.683619262,LastTimestamp:2026-02-24 09:08:03.295856704 +0000 UTC m=+5.683619262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.331882 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390defba204 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.516875268 +0000 UTC m=+5.904637826,LastTimestamp:2026-02-24 09:08:03.516875268 +0000 UTC m=+5.904637826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.338323 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390dfcb1d6a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.53047281 +0000 UTC m=+5.918235398,LastTimestamp:2026-02-24 09:08:03.53047281 +0000 UTC m=+5.918235398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.343939 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390dfe050f9 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.531862265 +0000 UTC m=+5.919624853,LastTimestamp:2026-02-24 09:08:03.531862265 +0000 UTC m=+5.919624853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.348284 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972390ee2749ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.771394506 +0000 UTC m=+6.159157104,LastTimestamp:2026-02-24 09:08:03.771394506 +0000 UTC m=+6.159157104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.352472 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18972390ef259c50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:03.788061776 +0000 UTC m=+6.175824364,LastTimestamp:2026-02-24 09:08:03.788061776 +0000 UTC m=+6.175824364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.356068 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:08:33 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-controller-manager-crc.189723911abec651 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 24 09:08:33 crc kubenswrapper[4822]: body: Feb 24 09:08:33 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:04.519519825 +0000 UTC m=+6.907282403,LastTimestamp:2026-02-24 09:08:04.519519825 +0000 UTC m=+6.907282403,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Feb 24 09:08:33 crc kubenswrapper[4822]: > Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.357607 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723911ac02935 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:04.519610677 +0000 UTC m=+6.907373255,LastTimestamp:2026-02-24 09:08:04.519610677 +0000 UTC m=+6.907373255,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.363899 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897239057e67cde\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239057e67cde openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.250565342 +0000 UTC m=+3.638327890,LastTimestamp:2026-02-24 09:08:13.482515839 +0000 UTC m=+15.870278417,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.367655 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:08:33 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-apiserver-crc.189723933d291f7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:08:33 crc kubenswrapper[4822]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:08:33 crc kubenswrapper[4822]: Feb 24 09:08:33 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:13.686849403 +0000 UTC m=+16.074611961,LastTimestamp:2026-02-24 09:08:13.686849403 +0000 UTC m=+16.074611961,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:08:33 crc kubenswrapper[4822]: > Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 
09:08:33.371172 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723933d29dbf7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:13.686897655 +0000 UTC m=+16.074660223,LastTimestamp:2026-02-24 09:08:13.686897655 +0000 UTC m=+16.074660223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.375146 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723933d291f7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:08:33 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-apiserver-crc.189723933d291f7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:08:33 crc kubenswrapper[4822]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:08:33 crc kubenswrapper[4822]: Feb 24 09:08:33 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:13.686849403 +0000 UTC m=+16.074611961,LastTimestamp:2026-02-24 09:08:13.692718721 +0000 UTC m=+16.080481279,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:08:33 crc kubenswrapper[4822]: > Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.378992 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723933d29dbf7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723933d29dbf7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:13.686897655 +0000 UTC m=+16.074660223,LastTimestamp:2026-02-24 09:08:13.692794073 +0000 UTC m=+16.080556641,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.383345 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897239064f38874\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239064f38874 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.469524084 +0000 UTC m=+3.857286632,LastTimestamp:2026-02-24 09:08:13.773726751 +0000 UTC m=+16.161489299,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.387540 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897239065cefc9e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897239065cefc9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:01.483906206 +0000 UTC m=+3.871668754,LastTimestamp:2026-02-24 09:08:13.787469329 +0000 UTC m=+16.175231877,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.392716 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:08:33 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-controller-manager-crc.189723936ec3c9c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:08:33 crc kubenswrapper[4822]: body: Feb 24 09:08:33 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519069129 +0000 UTC m=+16.906831707,LastTimestamp:2026-02-24 09:08:14.519069129 +0000 UTC m=+16.906831707,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:08:33 crc kubenswrapper[4822]: > Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.397066 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723936ec57600 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519178752 +0000 UTC m=+16.906941310,LastTimestamp:2026-02-24 09:08:14.519178752 +0000 UTC m=+16.906941310,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.403807 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723936ec3c9c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:08:33 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-controller-manager-crc.189723936ec3c9c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:08:33 crc kubenswrapper[4822]: body: Feb 24 09:08:33 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519069129 +0000 UTC m=+16.906831707,LastTimestamp:2026-02-24 09:08:24.520046372 +0000 UTC m=+26.907808960,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:08:33 crc kubenswrapper[4822]: > Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.408221 4822 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-controller-manager-crc.189723936ec57600\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723936ec57600 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519178752 +0000 UTC m=+16.906941310,LastTimestamp:2026-02-24 09:08:24.520136955 +0000 UTC m=+26.907899543,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.412393 4822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972395c3196aaf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 
09:08:24.523901615 +0000 UTC m=+26.911664163,LastTimestamp:2026-02-24 09:08:24.523901615 +0000 UTC m=+26.911664163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.418726 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897238ffb90fda2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897238ffb90fda2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:07:59.701458338 +0000 UTC m=+2.089220926,LastTimestamp:2026-02-24 09:08:24.646308425 +0000 UTC m=+27.034071003,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.425630 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723900f0d5a4c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723900f0d5a4c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.028375628 +0000 UTC m=+2.416138216,LastTimestamp:2026-02-24 09:08:24.902474889 +0000 UTC m=+27.290237467,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: E0224 09:08:33.430041 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723901204b515\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723901204b515 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:00.078140693 +0000 UTC m=+2.465903271,LastTimestamp:2026-02-24 09:08:24.915136467 +0000 UTC m=+27.302899025,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.569667 4822 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.570478 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.572965 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" exitCode=255 Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.573013 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78"} Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.573063 4822 scope.go:117] "RemoveContainer" containerID="08a43fc1b9042ae6f1841cb9ef1d2e3297e24addb5dc3e4d48cc676c098c8893" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.573270 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.574741 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.574795 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.574816 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:33 crc kubenswrapper[4822]: I0224 09:08:33.575858 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:08:33 
crc kubenswrapper[4822]: E0224 09:08:33.576220 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.107976 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.110105 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.110168 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.110185 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.110236 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:34 crc kubenswrapper[4822]: E0224 09:08:34.111316 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:08:34 crc kubenswrapper[4822]: E0224 09:08:34.112715 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.273240 4822 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.520070 4822 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.520185 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:08:34 crc kubenswrapper[4822]: E0224 09:08:34.529169 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723936ec3c9c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:08:34 crc kubenswrapper[4822]: &Event{ObjectMeta:{kube-controller-manager-crc.189723936ec3c9c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers) Feb 24 09:08:34 crc kubenswrapper[4822]: body: Feb 24 09:08:34 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519069129 +0000 UTC m=+16.906831707,LastTimestamp:2026-02-24 09:08:34.520156772 +0000 UTC m=+36.907919360,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:08:34 crc kubenswrapper[4822]: > Feb 24 09:08:34 crc kubenswrapper[4822]: E0224 09:08:34.537116 4822 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723936ec57600\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723936ec57600 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:08:14.519178752 +0000 UTC m=+16.906941310,LastTimestamp:2026-02-24 09:08:34.520234064 +0000 UTC m=+36.907996652,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:08:34 crc kubenswrapper[4822]: I0224 09:08:34.579897 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 
09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.272769 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.790330 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.790634 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.792747 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.792838 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.792865 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:35 crc kubenswrapper[4822]: I0224 09:08:35.793894 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:08:35 crc kubenswrapper[4822]: E0224 09:08:35.794338 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.006298 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.273365 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.590550 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.591855 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.591958 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.591986 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:36 crc kubenswrapper[4822]: I0224 09:08:36.593036 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:08:36 crc kubenswrapper[4822]: E0224 09:08:36.593342 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:08:38 crc kubenswrapper[4822]: I0224 09:08:37.272291 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:38 crc kubenswrapper[4822]: I0224 09:08:38.272900 4822 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:38 crc kubenswrapper[4822]: I0224 09:08:38.298429 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:08:38 crc kubenswrapper[4822]: I0224 09:08:38.323449 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:08:38 crc kubenswrapper[4822]: E0224 09:08:38.481977 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:08:38 crc kubenswrapper[4822]: W0224 09:08:38.492873 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:38 crc kubenswrapper[4822]: E0224 09:08:38.493000 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:08:39 crc kubenswrapper[4822]: I0224 09:08:39.272157 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:40 crc kubenswrapper[4822]: I0224 09:08:40.271907 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:40 crc kubenswrapper[4822]: W0224 09:08:40.861705 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 09:08:40 crc kubenswrapper[4822]: E0224 09:08:40.861799 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.113151 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.115824 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.115967 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.115999 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.116111 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:41 crc kubenswrapper[4822]: E0224 09:08:41.120410 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:08:41 crc kubenswrapper[4822]: E0224 09:08:41.122013 4822 controller.go:145] "Failed 
to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.273238 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.529682 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.530044 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.532203 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.532277 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.532296 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.538219 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.605561 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.607067 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:41 crc 
kubenswrapper[4822]: I0224 09:08:41.607139 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:41 crc kubenswrapper[4822]: I0224 09:08:41.607156 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:41 crc kubenswrapper[4822]: W0224 09:08:41.791085 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 09:08:41 crc kubenswrapper[4822]: E0224 09:08:41.791194 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:08:42 crc kubenswrapper[4822]: I0224 09:08:42.272796 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:43 crc kubenswrapper[4822]: I0224 09:08:43.269699 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:44 crc kubenswrapper[4822]: I0224 09:08:44.268932 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:44 crc kubenswrapper[4822]: W0224 09:08:44.397999 4822 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 09:08:44 crc kubenswrapper[4822]: E0224 09:08:44.398080 4822 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:08:45 crc kubenswrapper[4822]: I0224 09:08:45.274742 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:46 crc kubenswrapper[4822]: I0224 09:08:46.272285 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:47 crc kubenswrapper[4822]: I0224 09:08:47.274225 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.121516 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.123418 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.123472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.123487 4822 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.123522 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:48 crc kubenswrapper[4822]: E0224 09:08:48.127721 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:08:48 crc kubenswrapper[4822]: E0224 09:08:48.132001 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:08:48 crc kubenswrapper[4822]: I0224 09:08:48.273433 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:48 crc kubenswrapper[4822]: E0224 09:08:48.482119 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:08:49 crc kubenswrapper[4822]: I0224 09:08:49.273321 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.272736 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.336638 4822 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.338055 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.338133 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.338153 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.338898 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:08:50 crc kubenswrapper[4822]: E0224 09:08:50.339208 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.507552 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.507777 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.509283 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:50 crc kubenswrapper[4822]: I0224 09:08:50.509331 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:50 crc 
kubenswrapper[4822]: I0224 09:08:50.509350 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:51 crc kubenswrapper[4822]: I0224 09:08:51.269956 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:52 crc kubenswrapper[4822]: I0224 09:08:52.271830 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:53 crc kubenswrapper[4822]: I0224 09:08:53.272790 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:54 crc kubenswrapper[4822]: I0224 09:08:54.273389 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.128802 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.130585 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.130800 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.131268 4822 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.131679 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:08:55 crc kubenswrapper[4822]: E0224 09:08:55.138887 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:08:55 crc kubenswrapper[4822]: E0224 09:08:55.139481 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:08:55 crc kubenswrapper[4822]: I0224 09:08:55.272195 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:56 crc kubenswrapper[4822]: I0224 09:08:56.273656 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:57 crc kubenswrapper[4822]: I0224 09:08:57.272798 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:08:58 crc kubenswrapper[4822]: I0224 09:08:58.273075 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Feb 24 09:08:58 crc kubenswrapper[4822]: E0224 09:08:58.482438 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:08:59 crc kubenswrapper[4822]: I0224 09:08:59.271682 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:09:00 crc kubenswrapper[4822]: I0224 09:09:00.271734 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.272365 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.336314 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.337573 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.337614 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.337624 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.338260 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:09:01 crc 
kubenswrapper[4822]: I0224 09:09:01.668362 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.672084 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344"} Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.672306 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.673697 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.673772 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:01 crc kubenswrapper[4822]: I0224 09:09:01.673791 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.140134 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.141763 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.141815 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.141833 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.141867 4822 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Feb 24 09:09:02 crc kubenswrapper[4822]: E0224 09:09:02.144939 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:09:02 crc kubenswrapper[4822]: E0224 09:09:02.144975 4822 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.269234 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.678586 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.679631 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.683581 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" exitCode=255 Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.683666 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344"} 
Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.683821 4822 scope.go:117] "RemoveContainer" containerID="f8c4965db80b27049d45170c3bb7c413c4332d1f2c70ec5b46abdb5ed24d3b78" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.683979 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.686060 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.686130 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.686161 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:02 crc kubenswrapper[4822]: I0224 09:09:02.687359 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:02 crc kubenswrapper[4822]: E0224 09:09:02.687790 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:03 crc kubenswrapper[4822]: I0224 09:09:03.269849 4822 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:09:03 crc kubenswrapper[4822]: I0224 09:09:03.723975 4822 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:09:03 crc kubenswrapper[4822]: I0224 09:09:03.852130 4822 csr.go:261] certificate signing request csr-vbn8t is approved, waiting to be issued Feb 24 09:09:03 crc kubenswrapper[4822]: I0224 09:09:03.860751 4822 csr.go:257] certificate signing request csr-vbn8t is issued Feb 24 09:09:03 crc kubenswrapper[4822]: I0224 09:09:03.954118 4822 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 09:09:04 crc kubenswrapper[4822]: I0224 09:09:04.100375 4822 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 09:09:04 crc kubenswrapper[4822]: I0224 09:09:04.862800 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 01:21:23.233276892 +0000 UTC Feb 24 09:09:04 crc kubenswrapper[4822]: I0224 09:09:04.862846 4822 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6568h12m18.370433491s for next certificate rotation Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.790692 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.790981 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.792218 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.792267 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.792279 4822 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:05 crc kubenswrapper[4822]: I0224 09:09:05.792911 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:05 crc kubenswrapper[4822]: E0224 09:09:05.793130 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.007029 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.734996 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.736077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.736126 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.736137 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:06 crc kubenswrapper[4822]: I0224 09:09:06.736900 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:06 crc kubenswrapper[4822]: E0224 09:09:06.737135 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:08 crc kubenswrapper[4822]: E0224 09:09:08.483821 4822 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.145122 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.147007 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.147164 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.147185 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.147437 4822 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.160642 4822 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.160941 4822 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.160972 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.164808 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.164847 4822 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.164856 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.164872 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.164882 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:09Z","lastTransitionTime":"2026-02-24T09:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.185175 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.195466 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.195535 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.195556 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.195582 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.195599 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:09Z","lastTransitionTime":"2026-02-24T09:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.210307 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.222297 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.222403 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.222426 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.222489 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.222511 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:09Z","lastTransitionTime":"2026-02-24T09:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.237801 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.258345 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.258415 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.258437 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.258468 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.258486 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:09Z","lastTransitionTime":"2026-02-24T09:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.286383 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.286639 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.286685 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.336564 4822 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.338235 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.338305 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:09 crc kubenswrapper[4822]: I0224 09:09:09.338326 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.387279 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.487983 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.588938 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.690067 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.790606 
4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.891775 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:09 crc kubenswrapper[4822]: E0224 09:09:09.992877 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.093439 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.194001 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.294253 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.394658 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.494763 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.595588 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.695990 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.796519 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc kubenswrapper[4822]: E0224 09:09:10.896994 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:10 crc 
kubenswrapper[4822]: E0224 09:09:10.997762 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.099002 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.199876 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.300690 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.401177 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.502136 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.602693 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.703203 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.804273 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:11 crc kubenswrapper[4822]: E0224 09:09:11.905111 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.005497 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.106118 4822 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.206614 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.307025 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.408009 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.509093 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.609524 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.709778 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.810697 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:12 crc kubenswrapper[4822]: E0224 09:09:12.911659 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.012582 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.113330 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.213521 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.313977 4822 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.414730 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.515564 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.616005 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.716362 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.816736 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:13 crc kubenswrapper[4822]: E0224 09:09:13.916858 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 09:09:14.017357 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 09:09:14.118121 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 09:09:14.218885 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.235703 4822 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 09:09:14.319223 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 
09:09:14.420034 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: E0224 09:09:14.520120 4822 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.601157 4822 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.623103 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.623180 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.623202 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.623238 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.623262 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:14Z","lastTransitionTime":"2026-02-24T09:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.730330 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.730421 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.730438 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.730466 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.730484 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:14Z","lastTransitionTime":"2026-02-24T09:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.834011 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.834082 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.834103 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.834129 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.834209 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:14Z","lastTransitionTime":"2026-02-24T09:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.937256 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.937332 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.937355 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.937387 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:14 crc kubenswrapper[4822]: I0224 09:09:14.937410 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:14Z","lastTransitionTime":"2026-02-24T09:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.040101 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.040163 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.040179 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.040206 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.040224 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.143552 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.143660 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.143683 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.143745 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.143765 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.246732 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.246793 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.246811 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.246834 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.246852 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.275713 4822 apiserver.go:52] "Watching apiserver" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.283752 4822 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.284433 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-t2gjf","openshift-machine-config-operator/machine-config-daemon-qd752","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-image-registry/node-ca-fbp47","openshift-multus/multus-additional-cni-plugins-cw98v","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-669bp","openshift-multus/multus-lqrzq","openshift-multus/network-metrics-daemon-htbq4"] Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.284955 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.285113 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.285148 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.285203 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.285305 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.285837 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.287020 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.287434 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.287580 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.287711 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.288265 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289001 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.289076 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289139 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289157 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289264 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289355 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289587 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.289668 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.290641 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.291182 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.291859 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.292209 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.293029 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.294073 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.294436 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.294688 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.294834 4822 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.298472 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.298526 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.298870 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.299338 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.299547 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.299760 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300046 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300054 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300071 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300758 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 09:09:15 crc kubenswrapper[4822]: 
I0224 09:09:15.300373 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300684 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.300890 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301034 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301110 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301306 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301547 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301613 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301658 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301781 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.301827 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.301544 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.302152 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.302384 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.302606 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.303714 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.304559 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.317632 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.333687 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.349236 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.350773 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.350840 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.350859 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.350888 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.350937 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.368520 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.375175 4822 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.386793 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.402826 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.418907 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.423569 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.423838 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.424125 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.424363 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.424563 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.424760 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.424964 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425050 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425386 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425635 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425830 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425716 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426080 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426311 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425850 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.425952 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426388 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426413 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426361 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426539 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426580 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426614 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 
09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426647 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426654 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426683 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426718 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426797 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426844 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426888 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426946 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.426981 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427126 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427158 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 
09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427193 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427225 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427261 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427291 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427325 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427359 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427394 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427429 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427465 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427481 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427497 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427650 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427684 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427742 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427796 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427843 4822 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427885 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.427967 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428013 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428056 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428140 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428205 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428259 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428310 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428374 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428425 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428470 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428524 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428571 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428617 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428662 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428706 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428750 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428800 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428851 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428906 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428998 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429049 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429095 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429153 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429201 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429252 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429388 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429446 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429491 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429537 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429582 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429629 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429676 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429723 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429768 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429813 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429863 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429949 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 
09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430004 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430055 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430101 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430148 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430200 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430249 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430294 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430345 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430391 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430437 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430487 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:09:15 crc kubenswrapper[4822]: 
I0224 09:09:15.430533 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430576 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430625 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430661 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430706 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430753 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430823 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430867 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430944 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431002 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431053 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431101 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431151 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431198 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431248 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431297 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431344 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431390 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431445 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431500 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431555 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431607 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431655 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431703 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431748 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431795 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432036 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432098 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432150 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432200 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432249 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432299 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432345 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432392 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432442 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432496 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432546 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432598 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432648 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432698 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432750 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432797 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432846 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432893 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432984 4822 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433035 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433089 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433138 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433185 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433237 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" 
(UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433289 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433338 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433386 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433438 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433490 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433541 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433595 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433645 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433711 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433763 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433814 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433865 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433964 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434023 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434072 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435712 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435779 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435827 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435881 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436149 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436216 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436273 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436331 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436383 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436437 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436488 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436536 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436585 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436639 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436705 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436760 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436817 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436870 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 
09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437371 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437449 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437507 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437569 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437623 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437663 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" 
(UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437702 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437741 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437775 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437808 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437844 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437889 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.437977 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438031 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438083 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438134 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438175 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438216 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438253 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438300 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438354 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438410 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438462 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438540 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438680 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/306aba52-0b6e-4d3f-b05f-757daebc5e24-rootfs\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438755 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438839 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvql\" (UniqueName: \"kubernetes.io/projected/d5cc2023-21a7-4205-9492-ec1d1a0d146b-kube-api-access-9qvql\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " 
pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438896 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438999 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-netns\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439051 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-daemon-config\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439109 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439103 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439165 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-socket-dir-parent\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440350 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-etc-kubernetes\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440520 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440586 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440643 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/08191894-6514-4c09-aab9-e6c8f0f52354-hosts-file\") pod \"node-resolver-t2gjf\" (UID: \"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441828 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-system-cni-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441995 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-os-release\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.442061 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.442102 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-os-release\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.442150 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428145 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428203 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428239 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.428737 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429070 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429940 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.429955 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430368 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.430888 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431281 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431226 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431389 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.431818 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.432355 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433191 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433292 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433393 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433531 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.433867 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434094 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434694 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434758 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434740 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.434872 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435576 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.435487 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436457 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.436681 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438260 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438272 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438322 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438461 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438560 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438664 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.438857 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439052 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439263 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439367 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439666 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.439789 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440328 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440700 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.440716 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441373 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441384 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441427 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441510 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441774 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441834 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.441896 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.442321 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443261 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443406 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443504 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443620 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch\") pod 
\"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443733 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.443864 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cnibin\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444037 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444166 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444323 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444364 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444457 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2dz\" (UniqueName: \"kubernetes.io/projected/08191894-6514-4c09-aab9-e6c8f0f52354-kube-api-access-zl2dz\") pod \"node-resolver-t2gjf\" (UID: \"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.444633 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.446101 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.446430 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.446779 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.446797 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzvv4\" (UniqueName: \"kubernetes.io/projected/f22e7eb7-5eca-40b1-b7b8-6683604024ba-kube-api-access-jzvv4\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447085 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-hostroot\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447140 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447165 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.446963 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447308 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447484 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-multus\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447614 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5cc2023-21a7-4205-9492-ec1d1a0d146b-host\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447356 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.447897 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/306aba52-0b6e-4d3f-b05f-757daebc5e24-proxy-tls\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448056 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448121 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-binary-copy\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448143 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448238 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448285 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448303 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448771 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448814 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.448881 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449024 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-conf-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449086 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449141 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449180 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449207 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449267 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449301 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449410 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449470 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449536 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449598 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449654 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5cc2023-21a7-4205-9492-ec1d1a0d146b-serviceca\") pod \"node-ca-fbp47\" (UID: 
\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449709 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449764 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chdwv\" (UniqueName: \"kubernetes.io/projected/0fada5a7-935e-4bd3-931b-082fea67a9ec-kube-api-access-chdwv\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.451089 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-cnibin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.451290 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-multus-certs\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449536 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: 
"kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.449969 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450053 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450362 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450493 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450781 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450887 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.450904 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.451433 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.451593 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452078 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452093 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452149 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452245 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452304 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452361 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwn4x\" (UniqueName: \"kubernetes.io/projected/306aba52-0b6e-4d3f-b05f-757daebc5e24-kube-api-access-nwn4x\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452419 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75t9\" (UniqueName: 
\"kubernetes.io/projected/f51aff12-328f-4b79-8dbb-2079510f45dc-kube-api-access-g75t9\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452474 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-k8s-cni-cncf-io\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452527 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-bin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452579 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g779d\" (UniqueName: \"kubernetes.io/projected/90b654a4-010b-4a5e-b2d8-d42764fcb628-kube-api-access-g779d\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452589 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452642 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452706 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/306aba52-0b6e-4d3f-b05f-757daebc5e24-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452771 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-system-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452827 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452878 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log\") pod \"ovnkube-node-669bp\" (UID: 
\"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452890 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.452971 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453033 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8qv\" (UniqueName: \"kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453095 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-cni-binary-copy\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453148 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-kubelet\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453176 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453373 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453488 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.453513 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.453609 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:15.953579474 +0000 UTC m=+78.341342062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.453614 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.454670 4822 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.455327 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.455373 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.455817 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.456007 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.456047 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.456404 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.456987 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.457068 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.457486 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.457552 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.457632 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.457802 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.458089 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.458525 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.458967 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.459443 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.459482 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.459893 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.460043 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.460092 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.460174 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.460520 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.460990 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461045 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461093 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461134 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461574 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.461976 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.462372 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.462693 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.463158 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.463370 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.463647 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.463756 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:15.963731015 +0000 UTC m=+78.351493593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.463982 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.464132 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.464614 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.464634 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.464647 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.465179 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.465692 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.465799 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.466532 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.466581 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.466876 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.467440 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.467773 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.468112 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.468341 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.468703 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.468879 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.469880 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.470057 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.470415 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.470607 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.470874 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.470883 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.470900 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.470955 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.471234 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.471814 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.472015 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.474149 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.474184 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.474169 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.474443 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:15.974409751 +0000 UTC m=+78.362172389 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.474512 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.474780 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.475144 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.475342 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.475806 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.475962 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476060 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476176 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476460 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476466 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476897 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.476963 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477087 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477559 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477610 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477822 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477870 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477949 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478018 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478017 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478056 4822 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478085 4822 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478110 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.477107 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478132 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478155 4822 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478201 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478412 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478439 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478461 4822 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478485 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 
09:09:15.478517 4822 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478544 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478550 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478573 4822 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478649 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478657 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.478963 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479223 4822 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479269 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479282 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479299 4822 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479433 4822 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479458 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479482 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479504 4822 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479525 4822 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479547 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 
09:09:15.479569 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479594 4822 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479616 4822 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479637 4822 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479659 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479680 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479701 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479722 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479745 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479716 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479783 4822 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479724 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479737 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479835 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479850 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479876 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.479943 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480054 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480341 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: 
"node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480481 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480525 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.480672 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:15.980630408 +0000 UTC m=+78.368393066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480643 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480784 4822 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480849 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480867 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480881 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480899 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480953 4822 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480967 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.480979 4822 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481013 4822 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481025 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481727 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481792 4822 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481807 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481821 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481832 4822 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 
09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481845 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481861 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481873 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481967 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481985 4822 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.481997 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.482009 4822 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.482024 4822 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.482035 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.482047 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.483051 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.483478 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.484341 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.485838 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.486218 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.487565 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.487651 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.487970 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.488006 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.488027 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.488147 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.488102 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:15.988077258 +0000 UTC m=+78.375839836 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.488307 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.490159 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.490941 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.491148 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.492639 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.493049 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.494891 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.495223 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.498646 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.499954 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.500734 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.501024 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.501115 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.501218 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.505318 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.515179 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.516980 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.522618 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.528959 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.529451 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.541023 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.551221 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564338 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564457 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564543 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564623 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564699 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.564623 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582621 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-k8s-cni-cncf-io\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582717 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-k8s-cni-cncf-io\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582741 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-bin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582897 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g779d\" (UniqueName: \"kubernetes.io/projected/90b654a4-010b-4a5e-b2d8-d42764fcb628-kube-api-access-g779d\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582993 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwn4x\" (UniqueName: \"kubernetes.io/projected/306aba52-0b6e-4d3f-b05f-757daebc5e24-kube-api-access-nwn4x\") pod 
\"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.583139 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g75t9\" (UniqueName: \"kubernetes.io/projected/f51aff12-328f-4b79-8dbb-2079510f45dc-kube-api-access-g75t9\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.584112 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.584250 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.584172 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.582947 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-bin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " 
pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.584295 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.584393 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586110 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586275 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8qv\" (UniqueName: \"kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586307 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/306aba52-0b6e-4d3f-b05f-757daebc5e24-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.586357 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-system-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586386 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-cni-binary-copy\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586442 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-kubelet\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586466 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586514 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/306aba52-0b6e-4d3f-b05f-757daebc5e24-rootfs\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586554 
4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvql\" (UniqueName: \"kubernetes.io/projected/d5cc2023-21a7-4205-9492-ec1d1a0d146b-kube-api-access-9qvql\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586601 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-daemon-config\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586627 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586682 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-netns\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586707 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-etc-kubernetes\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586730 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586774 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586800 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-socket-dir-parent\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586844 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-os-release\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586974 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-os-release\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.586999 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587042 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587064 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/08191894-6514-4c09-aab9-e6c8f0f52354-hosts-file\") pod \"node-resolver-t2gjf\" (UID: \"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587108 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-system-cni-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587129 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587149 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587195 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587218 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587244 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cnibin\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587285 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587305 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587367 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzvv4\" (UniqueName: \"kubernetes.io/projected/f22e7eb7-5eca-40b1-b7b8-6683604024ba-kube-api-access-jzvv4\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587393 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-hostroot\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587467 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2dz\" (UniqueName: \"kubernetes.io/projected/08191894-6514-4c09-aab9-e6c8f0f52354-kube-api-access-zl2dz\") pod \"node-resolver-t2gjf\" (UID: \"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587491 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587511 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-multus\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587577 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5cc2023-21a7-4205-9492-ec1d1a0d146b-host\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587608 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/306aba52-0b6e-4d3f-b05f-757daebc5e24-proxy-tls\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587677 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-binary-copy\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587712 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587741 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd\") 
pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587802 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587837 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-conf-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587863 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587892 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587949 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.587981 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588041 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588071 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5cc2023-21a7-4205-9492-ec1d1a0d146b-serviceca\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588099 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588128 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chdwv\" (UniqueName: \"kubernetes.io/projected/0fada5a7-935e-4bd3-931b-082fea67a9ec-kube-api-access-chdwv\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588159 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588185 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-multus-certs\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588213 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588257 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588283 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-cnibin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " 
pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588450 4822 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588477 4822 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588492 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588508 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588525 4822 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588551 4822 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588569 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588586 4822 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588600 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588613 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588625 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588637 4822 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588649 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588661 4822 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588673 4822 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588684 4822 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588697 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588709 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588724 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588738 4822 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588749 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588761 4822 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588774 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588785 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588798 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588811 4822 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588823 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588835 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588847 4822 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588859 4822 
reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588871 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588883 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588903 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588937 4822 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588951 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588965 4822 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588977 4822 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.588989 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589001 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589013 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589025 4822 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589037 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589049 4822 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589062 4822 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589074 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589086 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589097 4822 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589110 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589122 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589155 4822 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589173 4822 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589188 4822 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589201 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589222 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589235 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589246 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589258 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589270 4822 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 
09:09:15.589282 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589293 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589306 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589317 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589329 4822 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589342 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589354 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589367 4822 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589379 4822 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589394 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589411 4822 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589428 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589444 4822 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589459 4822 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589474 4822 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589533 4822 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589551 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589569 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589587 4822 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589604 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589619 4822 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589635 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.589650 4822 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589667 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589687 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589703 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589962 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589977 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.589990 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.590004 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590016 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590056 4822 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590071 4822 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590083 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590094 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590106 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590118 4822 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590130 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590141 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590153 4822 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590165 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590177 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590193 4822 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590205 4822 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590217 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590229 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590242 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590253 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590264 4822 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590276 4822 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590288 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 
crc kubenswrapper[4822]: I0224 09:09:15.590300 4822 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590312 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590324 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590336 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590348 4822 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590365 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590377 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590390 4822 reconciler_common.go:293] 
"Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590406 4822 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590423 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590438 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590451 4822 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590463 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590476 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590514 4822 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590525 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590537 4822 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590549 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590564 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590576 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590588 4822 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590601 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.590687 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-cnibin\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.593739 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/306aba52-0b6e-4d3f-b05f-757daebc5e24-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.593955 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-system-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595011 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-cni-binary-copy\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595093 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-kubelet\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595157 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-hostroot\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595303 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595350 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-os-release\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595410 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595526 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595712 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/08191894-6514-4c09-aab9-e6c8f0f52354-hosts-file\") pod \"node-resolver-t2gjf\" (UID: 
\"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595781 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-system-cni-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595842 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.595907 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-netns\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.596013 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/306aba52-0b6e-4d3f-b05f-757daebc5e24-rootfs\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.596420 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.596584 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cnibin\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.596641 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.596862 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.597070 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.597182 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.597263 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-run-multus-certs\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.597339 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.597745 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-daemon-config\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.597871 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598015 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.598045 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f22e7eb7-5eca-40b1-b7b8-6683604024ba-cni-binary-copy\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598062 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-cni-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598143 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.598167 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598212 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.598240 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. 
No retries permitted until 2026-02-24 09:09:16.098212878 +0000 UTC m=+78.485975466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598283 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-conf-dir\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598317 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598426 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598524 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-multus-socket-dir-parent\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598628 4822 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f22e7eb7-5eca-40b1-b7b8-6683604024ba-os-release\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598667 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-host-var-lib-cni-multus\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.598722 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5cc2023-21a7-4205-9492-ec1d1a0d146b-host\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.599367 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.599453 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/90b654a4-010b-4a5e-b2d8-d42764fcb628-etc-kubernetes\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.600134 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5cc2023-21a7-4205-9492-ec1d1a0d146b-serviceca\") pod 
\"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.600272 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.600901 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.601459 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.601665 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.602437 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/306aba52-0b6e-4d3f-b05f-757daebc5e24-proxy-tls\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.603282 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.603403 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fada5a7-935e-4bd3-931b-082fea67a9ec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.604709 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.614004 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.614578 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwn4x\" (UniqueName: \"kubernetes.io/projected/306aba52-0b6e-4d3f-b05f-757daebc5e24-kube-api-access-nwn4x\") pod \"machine-config-daemon-qd752\" (UID: \"306aba52-0b6e-4d3f-b05f-757daebc5e24\") " pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.615591 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8qv\" (UniqueName: \"kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv\") pod \"ovnkube-node-669bp\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") " pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.616610 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g75t9\" (UniqueName: \"kubernetes.io/projected/f51aff12-328f-4b79-8dbb-2079510f45dc-kube-api-access-g75t9\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.619394 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g779d\" (UniqueName: \"kubernetes.io/projected/90b654a4-010b-4a5e-b2d8-d42764fcb628-kube-api-access-g779d\") pod \"multus-lqrzq\" (UID: \"90b654a4-010b-4a5e-b2d8-d42764fcb628\") " pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.622788 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2dz\" (UniqueName: \"kubernetes.io/projected/08191894-6514-4c09-aab9-e6c8f0f52354-kube-api-access-zl2dz\") pod \"node-resolver-t2gjf\" (UID: 
\"08191894-6514-4c09-aab9-e6c8f0f52354\") " pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.625452 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.631876 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzvv4\" (UniqueName: \"kubernetes.io/projected/f22e7eb7-5eca-40b1-b7b8-6683604024ba-kube-api-access-jzvv4\") pod \"multus-additional-cni-plugins-cw98v\" (UID: \"f22e7eb7-5eca-40b1-b7b8-6683604024ba\") " pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.632431 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvql\" (UniqueName: \"kubernetes.io/projected/d5cc2023-21a7-4205-9492-ec1d1a0d146b-kube-api-access-9qvql\") pod \"node-ca-fbp47\" (UID: \"d5cc2023-21a7-4205-9492-ec1d1a0d146b\") " pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.633431 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chdwv\" (UniqueName: \"kubernetes.io/projected/0fada5a7-935e-4bd3-931b-082fea67a9ec-kube-api-access-chdwv\") pod \"ovnkube-control-plane-749d76644c-gmrxl\" (UID: \"0fada5a7-935e-4bd3-931b-082fea67a9ec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.634088 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.640459 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: source /etc/kubernetes/apiserver-url.env Feb 24 09:09:15 crc kubenswrapper[4822]: else Feb 24 09:09:15 crc kubenswrapper[4822]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 09:09:15 crc kubenswrapper[4822]: exit 1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 09:09:15 crc kubenswrapper[4822]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.640794 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.642551 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.651960 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 09:09:15 crc kubenswrapper[4822]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 09:09:15 crc kubenswrapper[4822]: ho_enable="--enable-hybrid-overlay" Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 09:09:15 crc kubenswrapper[4822]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 09:09:15 crc kubenswrapper[4822]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-host=127.0.0.1 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-port=9743 \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ho_enable} \ Feb 24 09:09:15 crc kubenswrapper[4822]: --enable-interconnect \ Feb 24 09:09:15 crc kubenswrapper[4822]: --disable-approver \ Feb 24 09:09:15 crc kubenswrapper[4822]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --wait-for-kubernetes-api=200s \ Feb 24 09:09:15 crc kubenswrapper[4822]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel="${LOGLEVEL}" Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.652808 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fbp47" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.654140 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --disable-webhook \ Feb 24 09:09:15 crc kubenswrapper[4822]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel="${LOGLEVEL}" Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.655344 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.657776 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-1653c8ceea11310c86113f1b82d157ddc95a5866b6c24f8384ad4457c5427301 WatchSource:0}: Error finding container 1653c8ceea11310c86113f1b82d157ddc95a5866b6c24f8384ad4457c5427301: Status 404 returned error can't find the container with id 1653c8ceea11310c86113f1b82d157ddc95a5866b6c24f8384ad4457c5427301 Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.661094 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.662370 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.662441 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.663289 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5cc2023_21a7_4205_9492_ec1d1a0d146b.slice/crio-49a9ca8e0ab42eef8508482b442e370f0ad1388209e3c16db39712b71ce4e84b WatchSource:0}: Error finding container 49a9ca8e0ab42eef8508482b442e370f0ad1388209e3c16db39712b71ce4e84b: Status 404 returned error can't find the container with id 49a9ca8e0ab42eef8508482b442e370f0ad1388209e3c16db39712b71ce4e84b Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.667118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.667216 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.667300 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.667375 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.667448 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.668319 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 09:09:15 crc kubenswrapper[4822]: while [ true ]; Feb 24 09:09:15 crc kubenswrapper[4822]: do Feb 24 09:09:15 crc kubenswrapper[4822]: for f in $(ls /tmp/serviceca); do Feb 24 09:09:15 crc kubenswrapper[4822]: echo $f Feb 24 09:09:15 crc kubenswrapper[4822]: ca_file_path="/tmp/serviceca/${f}" Feb 24 09:09:15 crc kubenswrapper[4822]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 09:09:15 crc kubenswrapper[4822]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 09:09:15 crc kubenswrapper[4822]: if [ -e "${reg_dir_path}" ]; then Feb 24 09:09:15 crc kubenswrapper[4822]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: else Feb 24 09:09:15 crc kubenswrapper[4822]: mkdir $reg_dir_path Feb 24 09:09:15 crc kubenswrapper[4822]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: for d in $(ls /etc/docker/certs.d); do Feb 24 09:09:15 crc kubenswrapper[4822]: echo $d Feb 24 09:09:15 crc kubenswrapper[4822]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 09:09:15 crc kubenswrapper[4822]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 09:09:15 crc kubenswrapper[4822]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 24 09:09:15 crc kubenswrapper[4822]: rm -rf /etc/docker/certs.d/$d Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait ${!} Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qvql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-fbp47_openshift-image-registry(d5cc2023-21a7-4205-9492-ec1d1a0d146b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.670088 4822 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-fbp47" podUID="d5cc2023-21a7-4205-9492-ec1d1a0d146b" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.674347 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fada5a7_935e_4bd3_931b_082fea67a9ec.slice/crio-01dca5ca4c4398727e8e0080b9ffc47a26d4c05b2049bc49ac73c1996e522857 WatchSource:0}: Error finding container 01dca5ca4c4398727e8e0080b9ffc47a26d4c05b2049bc49ac73c1996e522857: Status 404 returned error can't find the container with id 01dca5ca4c4398727e8e0080b9ffc47a26d4c05b2049bc49ac73c1996e522857 Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.675493 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.676131 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -euo pipefail Feb 24 09:09:15 crc kubenswrapper[4822]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 09:09:15 crc kubenswrapper[4822]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 09:09:15 crc kubenswrapper[4822]: # As the secret mount is optional we must wait for the files to be present. Feb 24 09:09:15 crc kubenswrapper[4822]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 24 09:09:15 crc kubenswrapper[4822]: TS=$(date +%s) Feb 24 09:09:15 crc kubenswrapper[4822]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 09:09:15 crc kubenswrapper[4822]: HAS_LOGGED_INFO=0 Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: log_missing_certs(){ Feb 24 09:09:15 crc kubenswrapper[4822]: CUR_TS=$(date +%s) Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 24 09:09:15 crc kubenswrapper[4822]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 09:09:15 crc kubenswrapper[4822]: HAS_LOGGED_INFO=1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: } Feb 24 09:09:15 crc kubenswrapper[4822]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 24 09:09:15 crc kubenswrapper[4822]: log_missing_certs Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 5 Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/kube-rbac-proxy \ Feb 24 09:09:15 crc kubenswrapper[4822]: --logtostderr \ Feb 24 09:09:15 crc kubenswrapper[4822]: --secure-listen-address=:9108 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --upstream=http://127.0.0.1:29108/ \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-private-key-file=${TLS_PK} \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-cert-file=${TLS_CERT} Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-gmrxl_openshift-ovn-kubernetes(0fada5a7-935e-4bd3-931b-082fea67a9ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.681472 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_join_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 
09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_join_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_transit_switch_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_transit_switch_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: dns_name_resolver_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "false" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: persistent_ips_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "true" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # This is needed so that converting clusters from GA to TP Feb 24 09:09:15 crc kubenswrapper[4822]: # will rollout control plane pods as well Feb 24 09:09:15 crc kubenswrapper[4822]: network_segmentation_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: multi_network_enabled_flag= Feb 24 09:09:15 crc 
kubenswrapper[4822]: if [[ "true" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: multi_network_enabled_flag="--enable-multi-network" Feb 24 09:09:15 crc kubenswrapper[4822]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube \ Feb 24 09:09:15 crc kubenswrapper[4822]: --enable-interconnect \ Feb 24 09:09:15 crc kubenswrapper[4822]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-bind-address "127.0.0.1:29108" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-enable-pprof \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-enable-config-duration \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v4_join_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v6_join_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${dns_name_resolver_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${persistent_ips_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${multi_network_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${network_segmentation_enabled_flag} Feb 24 09:09:15 crc kubenswrapper[4822]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-gmrxl_openshift-ovn-kubernetes(0fada5a7-935e-4bd3-931b-082fea67a9ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.683523 4822 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" podUID="0fada5a7-935e-4bd3-931b-082fea67a9ec" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.683759 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.693420 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t2gjf" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.694285 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306aba52_0b6e_4d3f_b05f_757daebc5e24.slice/crio-46a2e39bee1d5719a6291536d3764add233baa740cd725a8fddfe013abc5db62 WatchSource:0}: Error finding container 46a2e39bee1d5719a6291536d3764add233baa740cd725a8fddfe013abc5db62: Status 404 returned error can't find the container with id 46a2e39bee1d5719a6291536d3764add233baa740cd725a8fddfe013abc5db62 Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.696877 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.701393 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.702709 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.704827 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lqrzq" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.707210 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72f416e6_5647_4b65_b06f_df73aca5e594.slice/crio-2559c6d343beb3d63a0ce41b808722d1a90cd58cb64711e91fb36365c5d898b9 WatchSource:0}: Error finding container 2559c6d343beb3d63a0ce41b808722d1a90cd58cb64711e91fb36365c5d898b9: Status 404 returned error can't find the container with id 2559c6d343beb3d63a0ce41b808722d1a90cd58cb64711e91fb36365c5d898b9 Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.709454 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 09:09:15 crc kubenswrapper[4822]: apiVersion: v1 Feb 24 09:09:15 crc kubenswrapper[4822]: clusters: Feb 24 09:09:15 crc kubenswrapper[4822]: - cluster: Feb 24 09:09:15 crc kubenswrapper[4822]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: server: https://api-int.crc.testing:6443 Feb 24 09:09:15 crc kubenswrapper[4822]: name: default-cluster Feb 24 09:09:15 crc kubenswrapper[4822]: contexts: Feb 24 09:09:15 crc kubenswrapper[4822]: - context: Feb 24 09:09:15 crc kubenswrapper[4822]: cluster: default-cluster Feb 24 09:09:15 crc kubenswrapper[4822]: namespace: default Feb 24 09:09:15 crc kubenswrapper[4822]: user: default-auth Feb 24 09:09:15 crc kubenswrapper[4822]: name: default-context Feb 24 09:09:15 crc kubenswrapper[4822]: current-context: default-context Feb 24 09:09:15 crc kubenswrapper[4822]: kind: Config Feb 24 09:09:15 crc kubenswrapper[4822]: preferences: {} Feb 24 09:09:15 crc kubenswrapper[4822]: users: Feb 24 
09:09:15 crc kubenswrapper[4822]: - name: default-auth Feb 24 09:09:15 crc kubenswrapper[4822]: user: Feb 24 09:09:15 crc kubenswrapper[4822]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:09:15 crc kubenswrapper[4822]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:09:15 crc kubenswrapper[4822]: EOF Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cg8qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.710420 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08191894_6514_4c09_aab9_e6c8f0f52354.slice/crio-8405bd9c4b8c75e5e383398634b344f01a53e44356e50591de8c80cb5526764c WatchSource:0}: Error finding container 8405bd9c4b8c75e5e383398634b344f01a53e44356e50591de8c80cb5526764c: Status 404 returned error can't 
find the container with id 8405bd9c4b8c75e5e383398634b344f01a53e44356e50591de8c80cb5526764c Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.710492 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cw98v" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.710739 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.714477 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -uo pipefail Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 09:09:15 crc kubenswrapper[4822]: HOSTS_FILE="/etc/hosts" Feb 24 09:09:15 crc kubenswrapper[4822]: TEMP_FILE="/etc/hosts.tmp" Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Make a temporary file with the old hosts file's attributes. Feb 24 09:09:15 crc kubenswrapper[4822]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo "Failed to preserve hosts file. Exiting." 
Feb 24 09:09:15 crc kubenswrapper[4822]: exit 1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: while true; do Feb 24 09:09:15 crc kubenswrapper[4822]: declare -A svc_ips Feb 24 09:09:15 crc kubenswrapper[4822]: for svc in "${services[@]}"; do Feb 24 09:09:15 crc kubenswrapper[4822]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 09:09:15 crc kubenswrapper[4822]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 09:09:15 crc kubenswrapper[4822]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 09:09:15 crc kubenswrapper[4822]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 24 09:09:15 crc kubenswrapper[4822]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 09:09:15 crc kubenswrapper[4822]: for i in ${!cmds[*]} Feb 24 09:09:15 crc kubenswrapper[4822]: do Feb 24 09:09:15 crc kubenswrapper[4822]: ips=($(eval "${cmds[i]}")) Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: svc_ips["${svc}"]="${ips[@]}" Feb 24 09:09:15 crc kubenswrapper[4822]: break Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Update /etc/hosts only if we get valid service IPs Feb 24 09:09:15 crc kubenswrapper[4822]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 09:09:15 crc kubenswrapper[4822]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 09:09:15 crc kubenswrapper[4822]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 09:09:15 crc kubenswrapper[4822]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: continue Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Append resolver entries for services Feb 24 09:09:15 crc kubenswrapper[4822]: rc=0 Feb 24 09:09:15 crc kubenswrapper[4822]: for svc in "${!svc_ips[@]}"; do Feb 24 09:09:15 crc kubenswrapper[4822]: for ip in ${svc_ips[${svc}]}; do Feb 24 09:09:15 crc kubenswrapper[4822]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ $rc -ne 0 ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: continue Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 09:09:15 crc kubenswrapper[4822]: # Replace /etc/hosts with our modified version if needed Feb 24 09:09:15 crc kubenswrapper[4822]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 09:09:15 crc kubenswrapper[4822]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: unset svc_ips Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl2dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-t2gjf_openshift-dns(08191894-6514-4c09-aab9-e6c8f0f52354): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.716152 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-t2gjf" podUID="08191894-6514-4c09-aab9-e6c8f0f52354" Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.727741 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b654a4_010b_4a5e_b2d8_d42764fcb628.slice/crio-0cde9968162b4d3de7477c30ea6434abf5485c99b68dfdb4d99715b49cfc7869 WatchSource:0}: Error finding container 
0cde9968162b4d3de7477c30ea6434abf5485c99b68dfdb4d99715b49cfc7869: Status 404 returned error can't find the container with id 0cde9968162b4d3de7477c30ea6434abf5485c99b68dfdb4d99715b49cfc7869 Feb 24 09:09:15 crc kubenswrapper[4822]: W0224 09:09:15.728594 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22e7eb7_5eca_40b1_b7b8_6683604024ba.slice/crio-e6581493c9076129298c2149d477a7e89c446393914c1f4a818cfb79087f0f24 WatchSource:0}: Error finding container e6581493c9076129298c2149d477a7e89c446393914c1f4a818cfb79087f0f24: Status 404 returned error can't find the container with id e6581493c9076129298c2149d477a7e89c446393914c1f4a818cfb79087f0f24 Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.731002 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 09:09:15 crc kubenswrapper[4822]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 09:09:15 crc kubenswrapper[4822]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g779d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lqrzq_openshift-multus(90b654a4-010b-4a5e-b2d8-d42764fcb628): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.732209 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lqrzq" podUID="90b654a4-010b-4a5e-b2d8-d42764fcb628" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.733205 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,
RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzvv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-cw98v_openshift-multus(f22e7eb7-5eca-40b1-b7b8-6683604024ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.734312 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-cw98v" podUID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.760628 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerStarted","Data":"e6581493c9076129298c2149d477a7e89c446393914c1f4a818cfb79087f0f24"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.762344 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzvv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-cw98v_openshift-multus(f22e7eb7-5eca-40b1-b7b8-6683604024ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.763325 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"2559c6d343beb3d63a0ce41b808722d1a90cd58cb64711e91fb36365c5d898b9"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.764815 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 09:09:15 crc kubenswrapper[4822]: apiVersion: v1 Feb 24 09:09:15 crc kubenswrapper[4822]: clusters: Feb 24 09:09:15 crc kubenswrapper[4822]: - cluster: Feb 24 09:09:15 crc kubenswrapper[4822]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: server: https://api-int.crc.testing:6443 Feb 24 09:09:15 crc kubenswrapper[4822]: name: default-cluster Feb 24 09:09:15 crc kubenswrapper[4822]: contexts: Feb 24 09:09:15 crc kubenswrapper[4822]: - context: Feb 24 09:09:15 crc kubenswrapper[4822]: cluster: default-cluster Feb 24 09:09:15 crc kubenswrapper[4822]: namespace: default Feb 24 09:09:15 crc kubenswrapper[4822]: user: default-auth Feb 24 09:09:15 crc kubenswrapper[4822]: name: default-context Feb 24 09:09:15 crc kubenswrapper[4822]: current-context: default-context Feb 24 09:09:15 crc kubenswrapper[4822]: kind: Config Feb 24 09:09:15 crc kubenswrapper[4822]: preferences: {} Feb 24 09:09:15 crc kubenswrapper[4822]: users: Feb 24 09:09:15 crc kubenswrapper[4822]: - name: default-auth Feb 24 09:09:15 crc kubenswrapper[4822]: user: Feb 24 09:09:15 crc kubenswrapper[4822]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:09:15 crc kubenswrapper[4822]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:09:15 crc kubenswrapper[4822]: EOF Feb 24 09:09:15 crc 
kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cg8qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.764821 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-cw98v" podUID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.765904 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 
09:09:15.766478 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"46a2e39bee1d5719a6291536d3764add233baa740cd725a8fddfe013abc5db62"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.767534 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d82f395f954c0fb5ab195aca62a96417ff08b6c97913e21c040028e46df3fa4"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.767791 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.768551 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fbp47" event={"ID":"d5cc2023-21a7-4205-9492-ec1d1a0d146b","Type":"ContainerStarted","Data":"49a9ca8e0ab42eef8508482b442e370f0ad1388209e3c16db39712b71ce4e84b"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.770075 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.770130 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.770148 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.770172 4822 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.770191 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.770299 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.771281 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 09:09:15 crc kubenswrapper[4822]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 09:09:15 crc kubenswrapper[4822]: ho_enable="--enable-hybrid-overlay" Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 09:09:15 crc kubenswrapper[4822]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 09:09:15 crc kubenswrapper[4822]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-host=127.0.0.1 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --webhook-port=9743 \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ho_enable} \ Feb 24 09:09:15 crc kubenswrapper[4822]: --enable-interconnect \ Feb 24 09:09:15 crc kubenswrapper[4822]: --disable-approver \ Feb 24 09:09:15 crc kubenswrapper[4822]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --wait-for-kubernetes-api=200s \ Feb 24 09:09:15 crc kubenswrapper[4822]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel="${LOGLEVEL}" Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.771949 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t2gjf" event={"ID":"08191894-6514-4c09-aab9-e6c8f0f52354","Type":"ContainerStarted","Data":"8405bd9c4b8c75e5e383398634b344f01a53e44356e50591de8c80cb5526764c"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 
09:09:15.772033 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.772331 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 09:09:15 crc kubenswrapper[4822]: while [ true ]; Feb 24 09:09:15 crc kubenswrapper[4822]: do Feb 24 09:09:15 crc kubenswrapper[4822]: for f in $(ls /tmp/serviceca); do Feb 24 09:09:15 crc kubenswrapper[4822]: echo $f Feb 24 09:09:15 crc kubenswrapper[4822]: ca_file_path="/tmp/serviceca/${f}" Feb 24 09:09:15 crc kubenswrapper[4822]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 09:09:15 crc kubenswrapper[4822]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 09:09:15 crc kubenswrapper[4822]: if [ -e "${reg_dir_path}" ]; then Feb 24 09:09:15 crc kubenswrapper[4822]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: else Feb 24 09:09:15 crc kubenswrapper[4822]: mkdir $reg_dir_path Feb 24 09:09:15 crc kubenswrapper[4822]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: for d in $(ls /etc/docker/certs.d); do Feb 24 09:09:15 crc kubenswrapper[4822]: echo $d Feb 24 09:09:15 crc 
kubenswrapper[4822]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 09:09:15 crc kubenswrapper[4822]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 09:09:15 crc kubenswrapper[4822]: if [ ! -e "${reg_conf_path}" ]; then Feb 24 09:09:15 crc kubenswrapper[4822]: rm -rf /etc/docker/certs.d/$d Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait ${!} Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qvql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-fbp47_openshift-image-registry(d5cc2023-21a7-4205-9492-ec1d1a0d146b): CreateContainerConfigError: services have not yet been 
read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.773279 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --disable-webhook \ Feb 24 09:09:15 crc kubenswrapper[4822]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel="${LOGLEVEL}" Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.773507 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-fbp47" podUID="d5cc2023-21a7-4205-9492-ec1d1a0d146b" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.773576 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: 
container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -uo pipefail Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 09:09:15 crc kubenswrapper[4822]: HOSTS_FILE="/etc/hosts" Feb 24 09:09:15 crc kubenswrapper[4822]: TEMP_FILE="/etc/hosts.tmp" Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Make a temporary file with the old hosts file's attributes. Feb 24 09:09:15 crc kubenswrapper[4822]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo "Failed to preserve hosts file. Exiting." Feb 24 09:09:15 crc kubenswrapper[4822]: exit 1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: while true; do Feb 24 09:09:15 crc kubenswrapper[4822]: declare -A svc_ips Feb 24 09:09:15 crc kubenswrapper[4822]: for svc in "${services[@]}"; do Feb 24 09:09:15 crc kubenswrapper[4822]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 09:09:15 crc kubenswrapper[4822]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 09:09:15 crc kubenswrapper[4822]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 09:09:15 crc kubenswrapper[4822]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 24 09:09:15 crc kubenswrapper[4822]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:09:15 crc kubenswrapper[4822]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 09:09:15 crc kubenswrapper[4822]: for i in ${!cmds[*]} Feb 24 09:09:15 crc kubenswrapper[4822]: do Feb 24 09:09:15 crc kubenswrapper[4822]: ips=($(eval "${cmds[i]}")) Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: svc_ips["${svc}"]="${ips[@]}" Feb 24 09:09:15 crc kubenswrapper[4822]: break Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Update /etc/hosts only if we get valid service IPs Feb 24 09:09:15 crc kubenswrapper[4822]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 09:09:15 crc kubenswrapper[4822]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 09:09:15 crc kubenswrapper[4822]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 09:09:15 crc kubenswrapper[4822]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: continue Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # Append resolver entries for services Feb 24 09:09:15 crc kubenswrapper[4822]: rc=0 Feb 24 09:09:15 crc kubenswrapper[4822]: for svc in "${!svc_ips[@]}"; do Feb 24 09:09:15 crc kubenswrapper[4822]: for ip in ${svc_ips[${svc}]}; do Feb 24 09:09:15 crc kubenswrapper[4822]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ $rc -ne 0 ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: continue Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 09:09:15 crc kubenswrapper[4822]: # Replace /etc/hosts with our modified version if needed Feb 24 09:09:15 crc kubenswrapper[4822]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 09:09:15 crc kubenswrapper[4822]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 60 & wait Feb 24 09:09:15 crc kubenswrapper[4822]: unset svc_ips Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl2dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-t2gjf_openshift-dns(08191894-6514-4c09-aab9-e6c8f0f52354): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.774544 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.774624 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-t2gjf" podUID="08191894-6514-4c09-aab9-e6c8f0f52354" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.774808 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerStarted","Data":"0cde9968162b4d3de7477c30ea6434abf5485c99b68dfdb4d99715b49cfc7869"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.776528 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 09:09:15 crc kubenswrapper[4822]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 09:09:15 crc kubenswrapper[4822]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g779d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lqrzq_openshift-multus(90b654a4-010b-4a5e-b2d8-d42764fcb628): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.776738 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.777241 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1653c8ceea11310c86113f1b82d157ddc95a5866b6c24f8384ad4457c5427301"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.777624 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lqrzq" podUID="90b654a4-010b-4a5e-b2d8-d42764fcb628" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.778571 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6e5579d04c8dcc4aaa721cc25f78c4d92fa96f8dfce020337d13fb6341436132"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.779327 4822 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.780345 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: source /etc/kubernetes/apiserver-url.env Feb 24 09:09:15 crc kubenswrapper[4822]: else Feb 24 09:09:15 crc kubenswrapper[4822]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 09:09:15 crc kubenswrapper[4822]: exit 1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 09:09:15 crc kubenswrapper[4822]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.780537 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.781341 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" 
event={"ID":"0fada5a7-935e-4bd3-931b-082fea67a9ec","Type":"ContainerStarted","Data":"01dca5ca4c4398727e8e0080b9ffc47a26d4c05b2049bc49ac73c1996e522857"} Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.781403 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.783836 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 09:09:15 crc kubenswrapper[4822]: set -euo pipefail Feb 24 09:09:15 crc kubenswrapper[4822]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 09:09:15 crc kubenswrapper[4822]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 09:09:15 crc kubenswrapper[4822]: # As the secret mount is optional we must wait for the files to be present. Feb 24 09:09:15 crc kubenswrapper[4822]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 24 09:09:15 crc kubenswrapper[4822]: TS=$(date +%s) Feb 24 09:09:15 crc kubenswrapper[4822]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 09:09:15 crc kubenswrapper[4822]: HAS_LOGGED_INFO=0 Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: log_missing_certs(){ Feb 24 09:09:15 crc kubenswrapper[4822]: CUR_TS=$(date +%s) Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 24 09:09:15 crc kubenswrapper[4822]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 09:09:15 crc kubenswrapper[4822]: HAS_LOGGED_INFO=1 Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: } Feb 24 09:09:15 crc kubenswrapper[4822]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 24 09:09:15 crc kubenswrapper[4822]: log_missing_certs Feb 24 09:09:15 crc kubenswrapper[4822]: sleep 5 Feb 24 09:09:15 crc kubenswrapper[4822]: done Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/kube-rbac-proxy \ Feb 24 09:09:15 crc kubenswrapper[4822]: --logtostderr \ Feb 24 09:09:15 crc kubenswrapper[4822]: --secure-listen-address=:9108 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 09:09:15 crc kubenswrapper[4822]: --upstream=http://127.0.0.1:29108/ \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-private-key-file=${TLS_PK} \ Feb 24 09:09:15 crc kubenswrapper[4822]: --tls-cert-file=${TLS_CERT} Feb 24 09:09:15 crc kubenswrapper[4822]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-gmrxl_openshift-ovn-kubernetes(0fada5a7-935e-4bd3-931b-082fea67a9ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.786131 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:09:15 crc kubenswrapper[4822]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ -f "/env/_master" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: set -o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: source "/env/_master" Feb 24 09:09:15 crc kubenswrapper[4822]: set +o allexport Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_join_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 
09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_join_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_transit_switch_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_transit_switch_subnet_opt= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "" != "" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: dns_name_resolver_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "false" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: persistent_ips_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: if [[ "true" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: # This is needed so that converting clusters from GA to TP Feb 24 09:09:15 crc kubenswrapper[4822]: # will rollout control plane pods as well Feb 24 09:09:15 crc kubenswrapper[4822]: network_segmentation_enabled_flag= Feb 24 09:09:15 crc kubenswrapper[4822]: multi_network_enabled_flag= Feb 24 09:09:15 crc 
kubenswrapper[4822]: if [[ "true" == "true" ]]; then Feb 24 09:09:15 crc kubenswrapper[4822]: multi_network_enabled_flag="--enable-multi-network" Feb 24 09:09:15 crc kubenswrapper[4822]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 09:09:15 crc kubenswrapper[4822]: fi Feb 24 09:09:15 crc kubenswrapper[4822]: Feb 24 09:09:15 crc kubenswrapper[4822]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 09:09:15 crc kubenswrapper[4822]: exec /usr/bin/ovnkube \ Feb 24 09:09:15 crc kubenswrapper[4822]: --enable-interconnect \ Feb 24 09:09:15 crc kubenswrapper[4822]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 09:09:15 crc kubenswrapper[4822]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-bind-address "127.0.0.1:29108" \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-enable-pprof \ Feb 24 09:09:15 crc kubenswrapper[4822]: --metrics-enable-config-duration \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v4_join_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v6_join_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${dns_name_resolver_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${persistent_ips_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${multi_network_enabled_flag} \ Feb 24 09:09:15 crc kubenswrapper[4822]: ${network_segmentation_enabled_flag} Feb 24 09:09:15 crc kubenswrapper[4822]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-gmrxl_openshift-ovn-kubernetes(0fada5a7-935e-4bd3-931b-082fea67a9ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:09:15 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.787264 4822 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" podUID="0fada5a7-935e-4bd3-931b-082fea67a9ec" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.787249 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.798699 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.813564 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.824771 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.840125 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.855446 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.868644 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.873445 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.873499 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.873519 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.873543 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.873560 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.880387 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.905129 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.924136 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.938775 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.953150 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.963779 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.972809 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.976383 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.976437 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:15 crc 
kubenswrapper[4822]: I0224 09:09:15.976458 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.976524 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.976544 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:15Z","lastTransitionTime":"2026-02-24T09:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.981425 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.990373 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.995470 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995583 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:16.995560253 +0000 UTC m=+79.383322831 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.995622 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.995651 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.995683 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:15 crc kubenswrapper[4822]: I0224 09:09:15.995723 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995815 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995830 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995835 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995854 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995887 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:16.995872032 +0000 UTC m=+79.383634590 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995906 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:16.995897722 +0000 UTC m=+79.383660280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995900 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995957 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.995999 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.996018 4822 projected.go:194] Error preparing data for projected 
volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.996072 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:16.996038066 +0000 UTC m=+79.383800654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:15 crc kubenswrapper[4822]: E0224 09:09:15.996111 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:16.996092317 +0000 UTC m=+79.383854975 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.002424 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.012994 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.023605 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.033552 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.044988 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.061745 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.071241 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.078667 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.078729 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.078751 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.078781 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.078801 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.082681 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.093323 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.107457 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.149607 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.182369 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.182427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.182439 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.182458 4822 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.182472 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.197110 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:16 crc kubenswrapper[4822]: E0224 09:09:16.197358 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:16 crc kubenswrapper[4822]: E0224 09:09:16.197467 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:09:17.197440712 +0000 UTC m=+79.585203350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.290044 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.290110 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.290136 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.290159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.290174 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.341911 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.342634 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.344080 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.345047 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.346148 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.346739 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.347452 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.348537 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.349261 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.350365 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.350954 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.352184 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.352751 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.353367 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.354412 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.355081 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.356156 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.356640 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.357320 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.358575 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.359114 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.360232 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.360769 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.361981 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.362562 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.363334 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.364556 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.365129 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.366640 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.367195 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.368218 4822 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.368344 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.370250 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.371063 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.372209 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.375545 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.377657 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.380180 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.382005 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.384414 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.385380 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.387562 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.388836 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.394087 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.394156 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.394179 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.394210 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.394230 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.395389 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.396538 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.399084 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.400275 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.402853 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.404087 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.406077 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.407343 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.408702 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.410976 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.412081 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.497748 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.497817 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.497835 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.497863 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.497884 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.600579 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.600626 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.600642 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.600666 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.600684 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.703859 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.703955 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.703974 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.703997 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.704014 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.806830 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.806895 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.806945 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.806973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.806991 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.910078 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.910507 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.910648 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.910786 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:16 crc kubenswrapper[4822]: I0224 09:09:16.910950 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:16Z","lastTransitionTime":"2026-02-24T09:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.004237 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.004542 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.004674 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.004779 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.004957 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.005148 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.005278 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.005261323 +0000 UTC m=+81.393023881 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.005775 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.005761907 +0000 UTC m=+81.393524465 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.005972 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006078 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006170 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006270 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.00625798 +0000 UTC m=+81.394020548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006385 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006492 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.006480716 +0000 UTC m=+81.394243284 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006631 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006718 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006797 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.006893 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.006882896 +0000 UTC m=+81.394645464 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.014059 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.014189 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.014270 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.014348 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.014422 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.117879 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.117961 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.118014 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.118041 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.118057 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.207801 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.208111 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.208228 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:09:19.2082009 +0000 UTC m=+81.595963478 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.220960 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.221019 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.221036 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.221060 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.221079 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.324098 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.324158 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.324181 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.324204 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.324222 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.336676 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.336713 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.336682 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.337009 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.337182 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.337331 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.337532 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.337729 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.353970 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.353986 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.354373 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.427409 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.427464 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.427475 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.427496 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.427509 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.530686 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.530765 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.530777 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.530822 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.530840 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.633802 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.633885 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.633902 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.633969 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.634000 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.737319 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.737600 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.737625 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.737652 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.737670 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.788137 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:17 crc kubenswrapper[4822]: E0224 09:09:17.788408 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.841110 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.841172 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.841189 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.841214 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.841232 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.944345 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.944438 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.944462 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.944497 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:17 crc kubenswrapper[4822]: I0224 09:09:17.944518 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:17Z","lastTransitionTime":"2026-02-24T09:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.047445 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.047501 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.047519 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.047544 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.047561 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.150990 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.151067 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.151085 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.151114 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.151132 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.253982 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.254043 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.254062 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.254087 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.254106 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.355255 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.357122 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.357182 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.357203 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.357224 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.357242 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.371400 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.385130 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.403425 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.421386 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.436149 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.449543 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.460378 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.460507 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.460529 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.460563 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.460581 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.464401 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.480868 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.507545 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.525411 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.541512 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.554679 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.563813 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 
09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.563881 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.563905 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.563989 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.564013 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.581749 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.597781 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.667506 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.667578 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.667597 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.667625 4822 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.667642 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.770777 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.770847 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.770867 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.770893 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.770938 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.874281 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.874379 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.874413 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.874447 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.874473 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.978003 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.978073 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.978090 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.978118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:18 crc kubenswrapper[4822]: I0224 09:09:18.978135 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:18Z","lastTransitionTime":"2026-02-24T09:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.028171 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.028347 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028415 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.028382403 +0000 UTC m=+85.416144981 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.028453 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.028494 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.028574 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028601 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 
09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028618 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028638 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028661 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028689 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.028670931 +0000 UTC m=+85.416433509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028736 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.028712502 +0000 UTC m=+85.416475080 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028762 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028811 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028834 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028770 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.028960 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.028889557 +0000 UTC m=+85.416652135 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.029018 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.028992489 +0000 UTC m=+85.416755127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.081097 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.081185 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.081203 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.081227 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.081245 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.184178 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.184237 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.184254 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.184278 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.184297 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.231604 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.231841 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.231959 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:09:23.231902135 +0000 UTC m=+85.619664723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.287639 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.287748 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.287774 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.287805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.287826 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.336994 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.337034 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.337089 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.337247 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.337339 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.337450 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.337700 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.337834 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.391883 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.391981 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.392000 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.392023 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.392040 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.495236 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.495361 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.495386 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.495417 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.495439 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.598890 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.598987 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.599006 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.599039 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.599063 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.659936 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.659972 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.659982 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.660000 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.660013 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.675682 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.681223 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.681298 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.681317 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.681340 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.681355 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.697541 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.702192 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.702232 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.702246 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.702267 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.702284 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.717217 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.717247 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.717259 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.717275 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.717287 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.736501 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.736545 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.736564 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.736585 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.736603 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.748063 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:19 crc kubenswrapper[4822]: E0224 09:09:19.748300 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.750684 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.750732 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.750750 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.750778 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.750795 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.853770 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.853841 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.853858 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.853885 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.853903 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.957129 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.957172 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.957188 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.957211 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:19 crc kubenswrapper[4822]: I0224 09:09:19.957228 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:19Z","lastTransitionTime":"2026-02-24T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.060082 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.060142 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.060159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.060184 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.060201 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.163109 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.163173 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.163190 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.163218 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.163236 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.266501 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.266605 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.266627 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.266651 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.266667 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.370206 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.370264 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.370282 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.370304 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.370321 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.472448 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.472499 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.472517 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.472540 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.472556 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.544684 4822 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.576047 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.576113 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.576137 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.576165 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.576182 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.679378 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.680334 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.680483 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.680623 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.680747 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.783674 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.783721 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.783738 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.783758 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.783776 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.886778 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.886844 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.886902 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.886965 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.886991 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.990465 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.990535 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.990554 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.990578 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:20 crc kubenswrapper[4822]: I0224 09:09:20.990594 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:20Z","lastTransitionTime":"2026-02-24T09:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.093562 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.093624 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.093646 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.093675 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.093698 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.196865 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.196950 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.196967 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.196993 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.197010 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.300432 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.300487 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.300503 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.300526 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.300546 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.336562 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.336605 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.336651 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:21 crc kubenswrapper[4822]: E0224 09:09:21.336787 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.336807 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:21 crc kubenswrapper[4822]: E0224 09:09:21.336902 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:21 crc kubenswrapper[4822]: E0224 09:09:21.337014 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:21 crc kubenswrapper[4822]: E0224 09:09:21.337065 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.402759 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.402999 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.403064 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.403127 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.403192 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.506269 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.506557 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.506626 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.506700 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.506774 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.609423 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.609693 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.609845 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.609943 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.610012 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.712823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.712871 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.712888 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.712938 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.712957 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.815901 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.815982 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.816001 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.816026 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.816043 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.919458 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.919515 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.919532 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.919558 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:21 crc kubenswrapper[4822]: I0224 09:09:21.919575 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:21Z","lastTransitionTime":"2026-02-24T09:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.022785 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.022851 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.022868 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.022896 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.022953 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.126950 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.127031 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.127055 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.127090 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.127113 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.230655 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.230744 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.230763 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.230791 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.230809 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.334047 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.334584 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.334856 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.335105 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.335274 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.439622 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.439699 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.439716 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.439742 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.439763 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.543025 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.543091 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.543110 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.543137 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.543160 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.551755 4822 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.646335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.646386 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.646404 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.646428 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.646447 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.749823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.749892 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.749948 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.749982 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.750005 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.853886 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.853976 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.853995 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.854024 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.854043 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.956480 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.956560 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.956579 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.956606 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:22 crc kubenswrapper[4822]: I0224 09:09:22.956623 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:22Z","lastTransitionTime":"2026-02-24T09:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.059261 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.059311 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.059374 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.059399 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.059416 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.073324 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.073511 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073526 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.073498394 +0000 UTC m=+93.461260972 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.073586 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.073627 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.073685 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073729 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 
09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073786 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073812 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073811 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073847 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073893 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073968 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073960 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.073888194 +0000 UTC m=+93.461650782 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.073995 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.074039 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.074016278 +0000 UTC m=+93.461778926 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.074078 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.074060119 +0000 UTC m=+93.461822837 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.074110 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.07409572 +0000 UTC m=+93.461858418 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.168389 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.168450 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.168468 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.168494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.168511 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.270951 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.271011 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.271035 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.271064 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.271086 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.276029 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.276248 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.276343 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:09:31.276313607 +0000 UTC m=+93.664076205 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.337165 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.337199 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.337210 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.337165 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.337373 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.337622 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.337755 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:23 crc kubenswrapper[4822]: E0224 09:09:23.337887 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.375732 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.375772 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.375783 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.375798 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.375809 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.479697 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.479761 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.479780 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.479806 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.479826 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.583033 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.583082 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.583099 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.583123 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.583139 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.686328 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.686388 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.686410 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.686440 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.686460 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.789906 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.790022 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.790041 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.790068 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.790085 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.893576 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.893640 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.893663 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.893694 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.893716 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.997033 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.997102 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.997125 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.997155 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:23 crc kubenswrapper[4822]: I0224 09:09:23.997177 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:23Z","lastTransitionTime":"2026-02-24T09:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.100092 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.100160 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.100177 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.100203 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.100221 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.203887 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.204008 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.204027 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.204478 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.204565 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.307850 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.307941 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.307962 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.307987 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.308004 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.411060 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.411141 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.411159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.411183 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.411199 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.514661 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.514721 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.514737 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.514760 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.514777 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.618162 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.618249 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.618273 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.618298 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.618317 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.721830 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.721902 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.721953 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.721981 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.721999 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.824698 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.824766 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.824783 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.824807 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.824825 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.927488 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.927554 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.927575 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.927600 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:24 crc kubenswrapper[4822]: I0224 09:09:24.927617 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:24Z","lastTransitionTime":"2026-02-24T09:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.031247 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.031310 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.031326 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.031509 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.031547 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.135375 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.135463 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.135486 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.135512 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.135530 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.238659 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.238742 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.238769 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.238803 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.238824 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.336516 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.336562 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.336608 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.336526 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:25 crc kubenswrapper[4822]: E0224 09:09:25.336677 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:25 crc kubenswrapper[4822]: E0224 09:09:25.336804 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:25 crc kubenswrapper[4822]: E0224 09:09:25.337045 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:25 crc kubenswrapper[4822]: E0224 09:09:25.337177 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.341905 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.342010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.342034 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.342107 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.342142 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.445017 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.445076 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.445098 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.445123 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.445142 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.548394 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.548428 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.548439 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.548456 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.548468 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.651476 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.651511 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.651522 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.651538 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.651547 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.754368 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.754442 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.754465 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.754492 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.754510 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.857207 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.857263 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.857279 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.857304 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.857321 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.961013 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.961068 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.961086 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.961113 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:25 crc kubenswrapper[4822]: I0224 09:09:25.961130 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:25Z","lastTransitionTime":"2026-02-24T09:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.064789 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.064949 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.064973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.064998 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.065017 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.168552 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.168601 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.168616 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.168638 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.168654 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.276053 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.276652 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.276688 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.276715 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.276737 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.380440 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.380501 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.380522 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.380551 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.380571 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.484412 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.484468 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.484486 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.484510 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.484528 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.587763 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.587842 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.587867 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.587898 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.587955 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.691570 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.691651 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.691673 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.691698 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.691715 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.795096 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.795146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.795160 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.795179 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.795191 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.815659 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t2gjf" event={"ID":"08191894-6514-4c09-aab9-e6c8f0f52354","Type":"ContainerStarted","Data":"7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.828516 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.841499 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.852708 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.869483 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.887895 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.898028 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.898083 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.898101 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.898125 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.898141 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:26Z","lastTransitionTime":"2026-02-24T09:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.903320 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.915503 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.931092 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.954325 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.971370 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.985399 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:26 crc kubenswrapper[4822]: I0224 09:09:26.998532 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.000637 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.000692 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 
09:09:27.000712 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.000740 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.000759 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.013643 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.030191 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.045536 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.104418 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.104462 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.104478 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.104502 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.104519 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.206823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.207207 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.207224 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.207246 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.207262 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.310353 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.310419 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.310443 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.310467 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.310493 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.336673 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.337015 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.337015 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.337101 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:27 crc kubenswrapper[4822]: E0224 09:09:27.337387 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:27 crc kubenswrapper[4822]: E0224 09:09:27.337548 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:27 crc kubenswrapper[4822]: E0224 09:09:27.337659 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:27 crc kubenswrapper[4822]: E0224 09:09:27.337805 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.414063 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.414128 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.414152 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.414182 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.414206 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.517903 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.518017 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.518036 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.518063 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.518092 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.620664 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.620713 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.620734 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.620760 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.620778 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.723982 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.724043 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.724056 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.724075 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.724087 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.822243 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fbp47" event={"ID":"d5cc2023-21a7-4205-9492-ec1d1a0d146b","Type":"ContainerStarted","Data":"22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.825453 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.827417 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.827476 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.827494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.827519 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.827577 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.835058 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.853488 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.870816 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.886275 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.897573 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.908883 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.921498 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.930552 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.930600 4822 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.930617 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.930642 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.930661 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:27Z","lastTransitionTime":"2026-02-24T09:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.936058 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.953226 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.967951 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:27 crc kubenswrapper[4822]: I0224 09:09:27.980823 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.008081 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.027087 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.032782 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.032850 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.032874 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.032904 4822 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.032998 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.045988 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.061089 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.089411 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.107297 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.122691 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.135744 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.135816 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.135835 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.135862 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.135882 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.137473 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.149511 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.166156 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.180406 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.192777 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.205393 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.218047 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.237405 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.238946 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.239021 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.239046 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.239077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.239102 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.254046 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.269532 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.281551 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.297503 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.341472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.341515 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.341531 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.341553 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.341572 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.352024 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.382847 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.401842 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.416488 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.430073 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.445686 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.445756 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 
09:09:28.445781 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.445816 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.445842 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.449516 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.462452 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.473692 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.489534 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.503966 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.513304 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.528817 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.543446 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557528 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557582 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557604 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557633 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557653 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.557673 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.570151 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.659847 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.659894 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.659926 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.659947 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.659961 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.763470 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.763528 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.763547 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.763572 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.763590 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.831728 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.831793 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.835082 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" exitCode=0 Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.835135 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.865155 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.870891 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.871077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.871118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.871151 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.871171 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.890563 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.907115 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.921858 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.952048 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.978032 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 
09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.978086 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.978100 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.978312 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.978332 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:28Z","lastTransitionTime":"2026-02-24T09:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:28 crc kubenswrapper[4822]: I0224 09:09:28.981101 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.007271 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.019177 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.030354 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.050543 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.070025 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.081722 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.081763 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.081772 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.081789 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.081799 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.085817 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.098634 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.109975 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.119973 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.133787 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.147810 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.164830 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.182058 4822 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.184098 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.184140 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.184161 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.184187 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.184205 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.203398 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.219718 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.236888 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.252459 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.266408 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.279793 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.286607 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.286639 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.286650 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.286665 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.286676 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.296235 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc 
kubenswrapper[4822]: I0224 09:09:29.310852 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.325924 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.336959 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.336994 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.337115 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.337112 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.337143 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.337231 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.337544 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.337606 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.338627 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.338862 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.342513 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.361386 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.391069 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.391108 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.391118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.391134 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.391145 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.493225 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.493515 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.493527 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.493544 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.493556 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.596699 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.596770 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.596783 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.596806 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.596822 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.699839 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.699903 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.699935 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.699959 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.699971 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.774143 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.774216 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.774232 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.774257 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.774275 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.795594 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.799906 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.799980 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.800000 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.800022 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.800041 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.825073 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.825141 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.825162 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.825189 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.825208 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846674 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846762 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846792 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846816 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846839 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.846862 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} Feb 24 09:09:29 crc kubenswrapper[4822]: 
E0224 09:09:29.879155 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.884019 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.884076 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.884098 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.884124 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.884142 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.904330 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:29Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:29 crc kubenswrapper[4822]: E0224 09:09:29.904550 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.907299 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.907371 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.907388 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.907414 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:29 crc kubenswrapper[4822]: I0224 09:09:29.907432 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:29Z","lastTransitionTime":"2026-02-24T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.011318 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.011403 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.011427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.011456 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.011474 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.114871 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.114986 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.115005 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.115081 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.115101 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.218332 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.218410 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.218432 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.218465 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.218488 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.321057 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.321108 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.321125 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.321149 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.321166 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.424306 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.424343 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.424351 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.424366 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.424375 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.526819 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.526868 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.526880 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.526899 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.526933 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.630402 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.630437 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.630446 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.630461 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.630470 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.732978 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.733037 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.733056 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.733088 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.733109 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.836104 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.836178 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.836199 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.836261 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.836288 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.852023 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerStarted","Data":"5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.854288 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.856816 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.857077 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.859564 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" event={"ID":"0fada5a7-935e-4bd3-931b-082fea67a9ec","Type":"ContainerStarted","Data":"d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.859646 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" 
event={"ID":"0fada5a7-935e-4bd3-931b-082fea67a9ec","Type":"ContainerStarted","Data":"576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.874481 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.905859 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.925628 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.939018 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.939098 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.939122 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.939153 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.939176 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:30Z","lastTransitionTime":"2026-02-24T09:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.944195 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:30 crc 
kubenswrapper[4822]: I0224 09:09:30.961478 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:30 crc kubenswrapper[4822]: I0224 09:09:30.980996 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.001718 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:30Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.036483 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.042655 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.042711 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.042721 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.042742 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.042753 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.060172 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.078471 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.098111 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.130329 4822 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.145074 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.145129 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.145142 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.145161 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.145174 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.148436 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.163582 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.163827 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.163784129 +0000 UTC m=+109.551546727 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.164057 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.164229 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.164351 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.164466 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164251 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164722 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.164703983 +0000 UTC m=+109.552466541 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164458 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164544 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164983 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 
09:09:31.165003 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164546 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.165112 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.165091433 +0000 UTC m=+109.552853991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.165151 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.165141794 +0000 UTC m=+109.552904352 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.164954 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.165367 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.165485 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.165473823 +0000 UTC m=+109.553236381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.166434 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kub
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.181873 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.203149 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.220656 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.238174 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.247771 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.247954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.248061 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.248196 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.248320 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.254568 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc 
kubenswrapper[4822]: I0224 09:09:31.273036 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.287554 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.307425 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.322350 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.337213 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.337300 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.337425 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.337557 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.337705 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.337877 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.337977 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.338031 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.339493 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.351203 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.351275 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.351301 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.351336 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.351360 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.360526 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.365961 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.366199 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: E0224 09:09:31.366327 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:09:47.366298274 +0000 UTC m=+109.754060822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.381319 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.409741 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.433419 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.451893 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.453713 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.453754 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.453765 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.453782 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.453796 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.470562 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.556777 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.556810 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.556821 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.556837 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.556850 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.659597 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.659638 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.659654 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.659674 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.659686 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.762786 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.762847 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.762864 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.762964 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.762997 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.865793 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.866418 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.866430 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.866469 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.866517 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.867105 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd" exitCode=0 Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.867233 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.869738 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerStarted","Data":"96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.895332 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.916655 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.935327 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1
bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.957141 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.971427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.971487 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.971509 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.971541 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.971563 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:31Z","lastTransitionTime":"2026-02-24T09:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:31 crc kubenswrapper[4822]: I0224 09:09:31.978498 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.001503 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.026469 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.040208 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.054335 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc 
kubenswrapper[4822]: I0224 09:09:32.071312 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.074661 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.074694 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.074706 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.074725 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.074737 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.089871 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.104267 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.125618 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.145300 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.165223 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.176876 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.176941 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.176955 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.176973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.176985 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.185475 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.221619 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.254589 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.274122 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.279656 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.279716 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.279735 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.279778 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.279798 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.295182 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.316984 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\
"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.335745 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.351026 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.365798 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc 
kubenswrapper[4822]: I0224 09:09:32.381749 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.381786 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.381795 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.381811 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.381821 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.385313 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.399529 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.416198 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.435748 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.454393 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.472692 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.484869 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc 
kubenswrapper[4822]: I0224 09:09:32.484938 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.484954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.484974 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.484988 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.588187 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.588261 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.588282 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.588307 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.588326 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.692362 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.692431 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.692450 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.692500 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.692520 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.795515 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.795565 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.795578 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.795604 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.795620 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.877277 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28" exitCode=0 Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.877374 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.893340 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.898700 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.899610 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.899648 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.899683 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.899706 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:32Z","lastTransitionTime":"2026-02-24T09:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.909049 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.935198 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.952365 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc 
kubenswrapper[4822]: I0224 09:09:32.971206 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:32 crc kubenswrapper[4822]: I0224 09:09:32.986130 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:32Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.004734 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.004793 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.004815 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.004848 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.004869 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.008617 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.030550 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.051270 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.063908 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.084612 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.107272 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.107335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.107354 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.107380 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.107398 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.118397 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.142706 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.162975 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.178238 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.193713 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.210467 4822 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.210517 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.210529 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.210547 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.210558 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.313385 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.313440 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.313457 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.313483 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.313499 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.337219 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.337299 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.337335 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:33 crc kubenswrapper[4822]: E0224 09:09:33.337497 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:33 crc kubenswrapper[4822]: E0224 09:09:33.337588 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:33 crc kubenswrapper[4822]: E0224 09:09:33.337690 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.337891 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:33 crc kubenswrapper[4822]: E0224 09:09:33.338387 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.416823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.416871 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.416883 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.416936 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.416951 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.519001 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.519084 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.519104 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.519133 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.519157 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.622502 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.622563 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.622580 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.622608 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.622627 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.726195 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.726277 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.726303 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.726333 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.726356 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.829767 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.830088 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.830191 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.830286 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.830375 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.901105 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537" exitCode=0 Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.901200 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.924638 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.932807 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.932845 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.932861 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.932904 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.932950 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:33Z","lastTransitionTime":"2026-02-24T09:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.939093 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.956713 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.978070 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:33 crc kubenswrapper[4822]: I0224 09:09:33.991644 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:33Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc 
kubenswrapper[4822]: I0224 09:09:34.004057 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.013171 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.029373 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.035621 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.035644 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.035654 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.035671 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.035682 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.043121 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.053591 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.069121 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.087844 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.106425 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d8387
95d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.121868 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.137873 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.137906 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.137932 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.137946 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.137955 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.141354 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.240100 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.240136 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.240146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.240163 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.240177 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.343187 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.343217 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.343226 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.343241 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.343253 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.446568 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.446619 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.446641 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.446671 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.446694 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.550117 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.550180 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.550196 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.550221 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.550238 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.654077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.654128 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.654144 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.654167 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.654183 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.757418 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.757488 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.757510 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.757556 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.757581 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.860903 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.860993 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.861010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.861037 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.861057 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.909212 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51" exitCode=0 Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.909267 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.924564 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.925042 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.925218 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.925380 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.943572 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.967619 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.967998 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.968211 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.968305 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.968341 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.968428 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:34Z","lastTransitionTime":"2026-02-24T09:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.971211 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.973948 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:34 crc kubenswrapper[4822]: I0224 09:09:34.989246 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:34Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.011161 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7e
f91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.046306 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.070444 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.073051 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.073114 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.073134 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.073160 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.073179 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.088219 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.101846 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1
bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.116092 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.136448 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.157667 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.172737 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.180144 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.180189 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.180209 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.180234 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.180251 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.185600 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.199012 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.216074 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.234546 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.244991 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.265517 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.280837 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.283954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.283983 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.283993 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.284010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.284022 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.294277 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.307777 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.320036 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.332359 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.336892 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:35 crc kubenswrapper[4822]: E0224 09:09:35.337009 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.337138 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.337213 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:35 crc kubenswrapper[4822]: E0224 09:09:35.337285 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:35 crc kubenswrapper[4822]: E0224 09:09:35.337366 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.337479 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:35 crc kubenswrapper[4822]: E0224 09:09:35.337579 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.343337 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b
108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.358815 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc 
kubenswrapper[4822]: I0224 09:09:35.371464 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.383512 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.386145 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.386288 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.386407 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.386538 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.386671 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.397875 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.413744 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.428515 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.490352 4822 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.490402 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.490410 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.490426 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.490438 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.593337 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.593405 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.593423 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.593453 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.593474 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.695893 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.696225 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.696436 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.696638 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.696827 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.800070 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.800128 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.800146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.800172 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.800188 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.902731 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.902792 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.902810 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.902834 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.902854 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:35Z","lastTransitionTime":"2026-02-24T09:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.936868 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97" exitCode=0 Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.937111 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97"} Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.972513 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:35 crc kubenswrapper[4822]: I0224 09:09:35.992966 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.005500 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.005547 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.005556 4822 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.005572 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.005581 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.017331 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.051783 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 
09:09:36.070679 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.089625 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.102223 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.107891 4822 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.107955 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.107970 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.107987 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.107999 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.126612 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.138361 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.155484 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.179402 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.194260 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.210684 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.210718 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.210728 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.210745 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.210756 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.216724 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.227378 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.237586 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.313634 4822 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.313698 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.313719 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.313749 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.313768 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.416291 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.416347 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.416361 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.416382 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.416395 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.519070 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.519117 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.519132 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.519151 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.519164 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.621804 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.621842 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.621851 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.621867 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.621877 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.724944 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.724995 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.725014 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.725038 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.725056 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.827571 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.827621 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.827634 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.827653 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.827665 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.930377 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.930447 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.930467 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.930492 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.930509 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:36Z","lastTransitionTime":"2026-02-24T09:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.946634 4822 generic.go:334] "Generic (PLEG): container finished" podID="f22e7eb7-5eca-40b1-b7b8-6683604024ba" containerID="41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6" exitCode=0 Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.946740 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerDied","Data":"41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6"} Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.968745 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:36 crc kubenswrapper[4822]: I0224 09:09:36.987000 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.001873 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.019072 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.032850 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.032939 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.032959 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.032985 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.033003 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.035780 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.051182 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.067539 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.080273 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc 
kubenswrapper[4822]: I0224 09:09:37.092843 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.109563 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.126398 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.136678 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.136804 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.136819 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.136834 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.136844 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.140671 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.154249 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.183189 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.199950 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.239485 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.239532 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.239546 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.239567 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.239583 4822 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.337217 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.337291 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.337326 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.337331 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:37 crc kubenswrapper[4822]: E0224 09:09:37.337467 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:37 crc kubenswrapper[4822]: E0224 09:09:37.338105 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:37 crc kubenswrapper[4822]: E0224 09:09:37.338176 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:37 crc kubenswrapper[4822]: E0224 09:09:37.338271 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.344462 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.344506 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.344521 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.344541 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.344556 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.448105 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.448164 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.448180 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.448458 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.448479 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.551615 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.551652 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.551663 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.551678 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.551690 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.654431 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.654485 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.654503 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.654533 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.654569 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.759252 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.759315 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.759337 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.759368 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.759388 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.863209 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.863305 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.863369 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.863402 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.863427 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.952855 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/0.log" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.956423 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb" exitCode=1 Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.956529 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.957661 4822 scope.go:117] "RemoveContainer" containerID="de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.963844 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" event={"ID":"f22e7eb7-5eca-40b1-b7b8-6683604024ba","Type":"ContainerStarted","Data":"1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.965862 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.965899 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.965912 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.965924 4822 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.965934 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:37Z","lastTransitionTime":"2026-02-24T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:37 crc kubenswrapper[4822]: I0224 09:09:37.986213 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:37Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.006591 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.026126 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.060353 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:09:36.845492 6408 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.845946 6408 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846343 6408 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846482 6408 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847269 6408 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847428 6408 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.848098 6408 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 09:09:36.848121 6408 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 09:09:36.848150 6408 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:36.848176 6408 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:36.849947 6408 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.069246 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.069308 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.069328 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.069360 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.069407 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.085893 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.107493 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.126764 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.151315 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c
5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.172756 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.172807 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.172825 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.172849 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.172867 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.175662 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7
110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.197020 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.211764 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.225276 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc 
kubenswrapper[4822]: I0224 09:09:38.237485 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.250334 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.269996 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.274978 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.275010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.275020 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.275037 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.275047 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.284966 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.302847 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.320220 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.336953 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.349842 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.361162 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc 
kubenswrapper[4822]: I0224 09:09:38.376040 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.378453 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.378475 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.378483 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.378499 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.378508 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.393612 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.414971 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.435944 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.457504 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.471390 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.481119 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.481352 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.481471 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.481651 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.481767 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.492734 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.527565 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:09:36.845492 6408 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.845946 6408 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846343 6408 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846482 6408 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847269 6408 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847428 6408 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.848098 6408 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 09:09:36.848121 6408 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 09:09:36.848150 6408 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:36.848176 6408 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:36.849947 6408 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.559944 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.581673 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.584185 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.584304 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.584360 4822 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.584422 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.584477 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.602313 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.623357 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.651217 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc 
kubenswrapper[4822]: I0224 09:09:38.671299 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.681713 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.686170 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.686332 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.686398 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.686456 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.686608 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.695763 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.708270 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.720960 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.733663 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.744566 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.763940 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:09:36.845492 6408 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.845946 6408 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846343 6408 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846482 6408 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847269 6408 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847428 6408 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.848098 6408 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 09:09:36.848121 6408 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 09:09:36.848150 6408 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:36.848176 6408 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:36.849947 6408 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.781622 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.789534 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.789598 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.789617 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.789641 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.789657 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.794186 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.807606 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.892755 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.892944 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.893145 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.893320 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.893433 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.970555 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/0.log" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.974809 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.975562 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.996901 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.997168 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.997427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.997615 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.997799 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:38Z","lastTransitionTime":"2026-02-24T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:38 crc kubenswrapper[4822]: I0224 09:09:38.997319 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.012738 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.031446 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.047802 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.076267 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:09:36.845492 6408 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.845946 6408 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846343 6408 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846482 6408 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847269 6408 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847428 6408 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.848098 6408 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 09:09:36.848121 6408 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 09:09:36.848150 6408 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:36.848176 6408 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:36.849947 6408 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.095320 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.099947 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.100004 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.100021 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.100048 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.100068 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.112246 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.129453 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1
bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.140948 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.154656 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.174603 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.193960 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.203586 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.203641 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.203659 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.203688 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.203705 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.211719 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.228427 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.254500 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.306261 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.306311 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.306323 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.306341 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.306354 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.336797 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:39 crc kubenswrapper[4822]: E0224 09:09:39.337041 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.337101 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.337101 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:39 crc kubenswrapper[4822]: E0224 09:09:39.337313 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:39 crc kubenswrapper[4822]: E0224 09:09:39.337395 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.337115 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:39 crc kubenswrapper[4822]: E0224 09:09:39.337516 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.408583 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.408641 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.408658 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.408683 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.408700 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.511753 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.511812 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.511829 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.511853 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.511870 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.614852 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.614960 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.614979 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.615004 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.615021 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.718399 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.718464 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.718480 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.718504 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.718523 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.821573 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.821629 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.821645 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.821669 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.821686 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.924619 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.924682 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.924702 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.924728 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.924746 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:39Z","lastTransitionTime":"2026-02-24T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.983092 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/1.log" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.985067 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/0.log" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.989187 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed" exitCode=1 Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.989240 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed"} Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.989297 4822 scope.go:117] "RemoveContainer" containerID="de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb" Feb 24 09:09:39 crc kubenswrapper[4822]: I0224 09:09:39.990359 4822 scope.go:117] "RemoveContainer" containerID="ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed" Feb 24 09:09:39 crc kubenswrapper[4822]: E0224 09:09:39.990692 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.016360 4822 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.028209 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.028264 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.028282 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.028306 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.028322 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.037915 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.056081 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.065509 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.065554 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.065569 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.065593 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.065611 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.086743 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091493 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091574 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091593 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091616 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091633 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.091479 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de433343a91cf7d72347bd092263f6d49d781f35c42e3e375f026e1ea1080ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:09:36.845492 6408 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.845946 6408 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846343 6408 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.846482 6408 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847269 6408 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.847428 6408 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:36.848098 6408 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 09:09:36.848121 6408 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 09:09:36.848150 6408 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:36.848176 6408 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:36.849947 6408 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.110876 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.115350 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.116732 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.116805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.116832 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.116879 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.116902 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.134636 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.139129 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.146034 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.146280 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.146429 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.146581 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.146736 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.156599 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.165512 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.169696 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.169743 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.169759 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.169784 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.169803 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.177803 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.188971 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: E0224 09:09:40.189205 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.191646 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.191691 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.191709 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.191732 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.191749 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.199143 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.217246 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.235392 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc 
kubenswrapper[4822]: I0224 09:09:40.254048 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.269568 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.293353 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.296263 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.296366 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.296386 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.296411 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.296428 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.316950 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:40Z 
is after 2025-08-24T17:21:41Z" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.399544 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.399598 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.399615 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.399639 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.399655 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.503533 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.503602 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.503655 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.503681 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.503701 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.607236 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.607323 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.607347 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.607379 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.607402 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.709853 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.709962 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.709986 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.710014 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.710040 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.813872 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.813937 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.813984 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.814009 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.814029 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.917659 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.917717 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.917735 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.917792 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.917810 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:40Z","lastTransitionTime":"2026-02-24T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:40 crc kubenswrapper[4822]: I0224 09:09:40.995506 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/1.log" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.002483 4822 scope.go:117] "RemoveContainer" containerID="ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.002813 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.021178 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.021242 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.021268 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.021296 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.021318 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.023084 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.042549 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.061731 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.093865 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.116730 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.124255 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.124306 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.124324 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.124348 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.124365 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.133729 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.154614 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.174654 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.193435 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.210062 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.225573 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc 
kubenswrapper[4822]: I0224 09:09:41.228792 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.228900 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.228949 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.228976 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.228994 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.242900 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.258844 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.280978 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.301264 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.332695 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.332765 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.332790 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.332823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.332846 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.336291 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.336472 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.336528 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.336541 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.336533 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.336671 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.336783 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.336888 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.337756 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:41 crc kubenswrapper[4822]: E0224 09:09:41.338041 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.436202 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.436261 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.436279 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.436304 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.436322 4822 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.539678 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.539748 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.539768 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.539793 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.539810 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.642644 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.642773 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.642798 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.642832 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.642856 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.746396 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.746464 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.746481 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.746506 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.746523 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.849217 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.849281 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.849297 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.849321 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.849338 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.952313 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.952385 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.952402 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.952427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:41 crc kubenswrapper[4822]: I0224 09:09:41.952445 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:41Z","lastTransitionTime":"2026-02-24T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.055416 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.055496 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.055518 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.055548 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.055566 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.163067 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.163450 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.163578 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.163711 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.163840 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.267037 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.267377 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.267506 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.267627 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.267756 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.371101 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.371420 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.371600 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.371727 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.371853 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.475388 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.475476 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.475492 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.475518 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.475537 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.577941 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.577995 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.578012 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.578034 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.578051 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.681367 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.681436 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.681454 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.681483 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.681503 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.785073 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.785466 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.785760 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.786047 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.786215 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.889150 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.889232 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.889256 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.889288 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.889312 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.992229 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.992301 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.992327 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.992360 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:42 crc kubenswrapper[4822]: I0224 09:09:42.992386 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:42Z","lastTransitionTime":"2026-02-24T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.095739 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.095813 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.095832 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.095866 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.095895 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.199556 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.199623 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.199643 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.199672 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.199692 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.302548 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.302589 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.302605 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.302629 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.302645 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.336483 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.336546 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.336535 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.336496 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:43 crc kubenswrapper[4822]: E0224 09:09:43.336724 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:43 crc kubenswrapper[4822]: E0224 09:09:43.336880 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:43 crc kubenswrapper[4822]: E0224 09:09:43.337009 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:43 crc kubenswrapper[4822]: E0224 09:09:43.337148 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.405325 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.405394 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.405419 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.405450 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.405471 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.508083 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.508159 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.508174 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.508194 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.508208 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.611844 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.611899 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.611922 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.611989 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.612009 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.715439 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.715488 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.715501 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.715521 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.715533 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.818945 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.819220 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.819445 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.819577 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.819707 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.922983 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.923342 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.923494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.923681 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:43 crc kubenswrapper[4822]: I0224 09:09:43.923814 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:43Z","lastTransitionTime":"2026-02-24T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.027166 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.027782 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.027795 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.027816 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.027825 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.130855 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.130973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.130992 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.131019 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.131037 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.233600 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.233656 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.233673 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.233697 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.233715 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.336096 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.336138 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.336174 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.336195 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.336211 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.439293 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.439369 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.439393 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.439422 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.439450 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.543124 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.543184 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.543201 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.543226 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.543243 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.646034 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.646122 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.646146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.646183 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.646209 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.749308 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.749386 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.749404 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.749429 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.749447 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.855359 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.855431 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.855486 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.855627 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.855649 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.959408 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.959460 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.959482 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.959511 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:44 crc kubenswrapper[4822]: I0224 09:09:44.959551 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:44Z","lastTransitionTime":"2026-02-24T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.062129 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.062192 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.062211 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.062237 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.062254 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.165342 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.165406 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.165422 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.165448 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.165469 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.269225 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.269292 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.269309 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.269335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.269353 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.336738 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.336761 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.336775 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.336885 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:45 crc kubenswrapper[4822]: E0224 09:09:45.337074 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:45 crc kubenswrapper[4822]: E0224 09:09:45.337218 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:45 crc kubenswrapper[4822]: E0224 09:09:45.337376 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:45 crc kubenswrapper[4822]: E0224 09:09:45.337482 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.372299 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.372358 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.372417 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.372449 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.372467 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.475427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.475502 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.475522 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.475548 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.475565 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.578294 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.578359 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.578376 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.578402 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.578422 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.681046 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.681335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.681504 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.681696 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.681843 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.784696 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.785467 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.785609 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.785746 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.785890 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.889234 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.889285 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.889304 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.889329 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.889348 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.992433 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.992495 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.992512 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.992538 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:45 crc kubenswrapper[4822]: I0224 09:09:45.992730 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:45Z","lastTransitionTime":"2026-02-24T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.096003 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.096316 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.096453 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.096590 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.096708 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.200139 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.200206 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.200225 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.200255 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.200272 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.303496 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.303564 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.303583 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.303608 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.303624 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.358853 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.406605 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.406664 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.406680 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.406703 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.406720 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.509755 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.509948 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.509984 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.510014 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.510051 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.614044 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.614106 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.614123 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.614148 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.614168 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.716828 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.716889 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.716907 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.716967 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.716986 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.819757 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.820112 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.820212 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.820286 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.820353 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.927466 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.927551 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.927579 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.927610 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:46 crc kubenswrapper[4822]: I0224 09:09:46.927644 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:46Z","lastTransitionTime":"2026-02-24T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.030683 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.030973 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.031069 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.031165 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.031255 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.134531 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.134589 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.134609 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.134634 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.134652 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.236837 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.236870 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.236879 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.236891 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.236901 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.255201 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.255411 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.255480 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255505 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255524 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.255527 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255534 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255562 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255555 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.255529464 +0000 UTC m=+141.643292012 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255626 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255705 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255719 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.255740 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255757 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.25575034 +0000 UTC m=+141.643512878 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255803 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255846 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.255822261 +0000 UTC m=+141.643584849 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255884 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.255857892 +0000 UTC m=+141.643620470 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.255949 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.255903594 +0000 UTC m=+141.643666172 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.336546 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.336667 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.336881 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.336974 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.336969 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.337089 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.337247 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.337395 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.340367 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.340445 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.340472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.340499 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.340520 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.353309 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.443385 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.443733 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.444662 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.444723 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.444743 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.457424 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.457630 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: E0224 09:09:47.457750 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:10:19.45772066 +0000 UTC m=+141.845483248 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.548332 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.548404 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.548422 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.548446 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.548464 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.651629 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.651674 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.651687 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.651705 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.651718 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.754952 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.754983 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.755010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.755023 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.755032 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.858338 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.858409 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.858433 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.858461 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.858484 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.961627 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.961684 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.961702 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.961729 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:47 crc kubenswrapper[4822]: I0224 09:09:47.961746 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:47Z","lastTransitionTime":"2026-02-24T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.064054 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.064121 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.064137 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.064162 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.064179 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.167668 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.167741 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.167763 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.167790 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.167808 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.270709 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.270771 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.270788 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.270812 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.270829 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.359026 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.373570 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.373805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.373829 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.373852 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.373870 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.380369 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.409651 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.464523 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.476510 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.476544 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.476555 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.476573 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.476585 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.481018 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.513380 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.530046 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.544531 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.562896 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.576171 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.581033 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.581088 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.581100 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.581117 4822 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.581130 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.591468 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.600782 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.613159 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc 
kubenswrapper[4822]: I0224 09:09:48.631112 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.643270 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.663654 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683499 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683527 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683536 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683548 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683557 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.683708 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:48Z 
is after 2025-08-24T17:21:41Z" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.786737 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.786790 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.786805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.786831 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.786847 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.889858 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.890254 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.890410 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.890559 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.890704 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.993671 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.993742 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.993766 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.993800 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:48 crc kubenswrapper[4822]: I0224 09:09:48.993861 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:48Z","lastTransitionTime":"2026-02-24T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.097451 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.097524 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.097549 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.097575 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.097592 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.205544 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.205623 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.205642 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.205671 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.205688 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.308481 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.308543 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.308560 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.308586 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.308602 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.336485 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.336505 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.336572 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:49 crc kubenswrapper[4822]: E0224 09:09:49.336642 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.336664 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:49 crc kubenswrapper[4822]: E0224 09:09:49.336789 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:49 crc kubenswrapper[4822]: E0224 09:09:49.336944 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:49 crc kubenswrapper[4822]: E0224 09:09:49.337074 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.412315 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.412379 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.412399 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.412425 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.412442 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.515895 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.516023 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.516046 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.516076 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.516101 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.620738 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.620816 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.620839 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.620874 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.620897 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.723481 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.723555 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.723580 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.723616 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.723639 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.827209 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.827265 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.827280 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.827300 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.827321 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.930481 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.930591 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.930611 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.930636 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:49 crc kubenswrapper[4822]: I0224 09:09:49.930653 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:49Z","lastTransitionTime":"2026-02-24T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.034663 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.034760 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.034775 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.034803 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.034821 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.138118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.138184 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.138202 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.138227 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.138246 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.241833 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.241890 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.241954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.242002 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.242027 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.243551 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.243589 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.243605 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.243625 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.243641 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.265300 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.272110 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.272161 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.272179 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.272202 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.272218 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.292189 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.297571 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.297643 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.297665 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.297696 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.297721 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.318745 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.324582 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.324809 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.325010 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.325204 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.325444 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.346386 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.352357 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.352429 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.352454 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.352485 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.352508 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.373602 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:50 crc kubenswrapper[4822]: E0224 09:09:50.374010 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.376325 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.376389 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.376408 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.376439 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.376466 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.480493 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.480870 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.481074 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.481228 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.481380 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.585526 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.586065 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.586275 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.586477 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.586668 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.690441 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.690813 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.691035 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.691213 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.691368 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.795314 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.795378 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.795395 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.795421 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.795439 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.899055 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.899147 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.899169 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.899204 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:50 crc kubenswrapper[4822]: I0224 09:09:50.899230 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:50Z","lastTransitionTime":"2026-02-24T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.003196 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.003263 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.003282 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.003315 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.003344 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.106607 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.106664 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.106685 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.106711 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.106730 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.209527 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.209600 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.209621 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.209645 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.209663 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.313099 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.313443 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.313571 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.313702 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.313837 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.336698 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.336799 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:51 crc kubenswrapper[4822]: E0224 09:09:51.336903 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.337009 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:51 crc kubenswrapper[4822]: E0224 09:09:51.337151 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:51 crc kubenswrapper[4822]: E0224 09:09:51.337326 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.336734 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:51 crc kubenswrapper[4822]: E0224 09:09:51.338303 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.416440 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.416494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.416505 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.416523 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.416534 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.519705 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.519778 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.519805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.519835 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.519858 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.622797 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.622860 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.622879 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.622904 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.622954 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.725451 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.725502 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.725522 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.725545 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.725562 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.829034 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.829100 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.829119 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.829146 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.829166 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.931452 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.931571 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.931601 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.931634 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:51 crc kubenswrapper[4822]: I0224 09:09:51.931656 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:51Z","lastTransitionTime":"2026-02-24T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.034669 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.034724 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.034741 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.034764 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.034783 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.137615 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.137689 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.137706 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.137731 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.137750 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.240942 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.240999 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.241018 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.241045 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.241063 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.343201 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.343254 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.343265 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.343284 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.343297 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.446167 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.446231 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.446249 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.446272 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.446288 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.549309 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.549373 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.549392 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.549415 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.549433 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.651984 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.652046 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.652062 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.652087 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.652106 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.756354 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.756427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.756444 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.756471 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.756488 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.858297 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.858676 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.858849 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.859060 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.859218 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.962446 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.962509 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.962524 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.962548 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:52 crc kubenswrapper[4822]: I0224 09:09:52.962569 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:52Z","lastTransitionTime":"2026-02-24T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.064803 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.064884 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.064966 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.065004 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.065025 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.168282 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.168348 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.168364 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.168390 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.168406 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.271535 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.271585 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.271608 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.271634 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.271652 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.337235 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.337235 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.337276 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.337282 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:53 crc kubenswrapper[4822]: E0224 09:09:53.337884 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:53 crc kubenswrapper[4822]: E0224 09:09:53.338070 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:53 crc kubenswrapper[4822]: E0224 09:09:53.338177 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:53 crc kubenswrapper[4822]: E0224 09:09:53.338314 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.374380 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.374427 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.374449 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.374472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.374488 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.477245 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.477356 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.477375 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.477398 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.477416 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.580248 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.580326 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.580354 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.580383 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.580406 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.683368 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.683461 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.683483 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.683516 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.683540 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.786896 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.787006 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.787032 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.787066 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.787085 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.889593 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.889660 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.889678 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.889704 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.889722 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.992805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.992864 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.992880 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.992902 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:53 crc kubenswrapper[4822]: I0224 09:09:53.992979 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:53Z","lastTransitionTime":"2026-02-24T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.096373 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.096433 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.096449 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.096473 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.096491 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.200220 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.200340 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.200357 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.200422 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.200440 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.302999 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.303102 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.303118 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.303144 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.303160 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.406546 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.406598 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.406618 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.406665 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.406694 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.510080 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.510147 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.510163 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.510189 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.510207 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.613417 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.613495 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.613512 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.613538 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.613556 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.716867 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.716985 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.717009 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.717039 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.717059 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.820464 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.820542 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.820561 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.820587 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.820604 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.923833 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.923901 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.923967 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.923992 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:54 crc kubenswrapper[4822]: I0224 09:09:54.924010 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:54Z","lastTransitionTime":"2026-02-24T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.027899 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.027998 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.028017 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.028043 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.028062 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.131106 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.131220 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.131240 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.131266 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.131283 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.233974 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.234045 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.234063 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.234092 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.234110 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.336234 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.336300 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:55 crc kubenswrapper[4822]: E0224 09:09:55.336330 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:55 crc kubenswrapper[4822]: E0224 09:09:55.336422 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.336530 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:55 crc kubenswrapper[4822]: E0224 09:09:55.336572 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337055 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:55 crc kubenswrapper[4822]: E0224 09:09:55.337252 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337374 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337454 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337472 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337510 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.337537 4822 scope.go:117] "RemoveContainer" containerID="ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.338357 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.440048 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.440347 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.440368 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.440395 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.440413 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.545721 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.545780 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.545799 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.545824 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.545842 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.648884 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.648957 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.648971 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.648994 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.649006 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.753284 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.753344 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.753361 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.753389 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.753407 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.856865 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.856947 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.856966 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.856992 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.857013 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.959097 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.959152 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.959165 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.959186 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:55 crc kubenswrapper[4822]: I0224 09:09:55.959199 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:55Z","lastTransitionTime":"2026-02-24T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.060832 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.061015 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.061056 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.061068 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.061085 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.061097 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.063626 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.064289 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.066197 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/1.log" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.069552 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.070802 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.084750 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.096174 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.115324 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc 
kubenswrapper[4822]: I0224 09:09:56.127464 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.142000 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.160056 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.163866 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.163936 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.163952 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.163971 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.163985 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.178759 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.196598 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.214215 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.231053 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metr
ics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.267366 4822 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.267449 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.267467 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.267490 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.267508 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.269141 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.297536 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.322032 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.340264 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.354521 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.369580 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.369622 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.369633 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.369659 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.369670 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.372535 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.383649 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.394725 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.404988 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.413016 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.426668 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc 
kubenswrapper[4822]: I0224 09:09:56.435397 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.444399 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.458754 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.470819 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.471954 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.471998 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.472011 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.472038 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.472050 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.489414 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.501765 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.513582 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.525332 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.542803 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 
09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.560245 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.574326 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.574373 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.574385 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.574401 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.574413 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.578805 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.598215 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.610171 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.676476 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.676580 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.676597 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.676621 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.676638 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.779215 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.779267 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.779281 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.779296 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.779307 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.882195 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.882281 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.882307 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.882336 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.882363 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.985597 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.985669 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.985694 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.985738 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:56 crc kubenswrapper[4822]: I0224 09:09:56.985760 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:56Z","lastTransitionTime":"2026-02-24T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.076092 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/2.log" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.077128 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/1.log" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.081470 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" exitCode=1 Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.081532 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.081615 4822 scope.go:117] "RemoveContainer" containerID="ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.083232 4822 scope.go:117] "RemoveContainer" containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" Feb 24 09:09:57 crc kubenswrapper[4822]: E0224 09:09:57.083603 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.090191 4822 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.090240 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.090258 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.090283 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.090302 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.109508 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.144411 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.165721 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.186116 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.192659 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.192717 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.192740 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.192769 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.192793 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.204986 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.221725 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.251369 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab37974e73b5031681843648a5e0979aaeb9f56415c35892f133a9de3e4ec5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:39Z\\\",\\\"message\\\":\\\":39.208520 6656 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208645 6656 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208680 6656 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208718 6656 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208751 6656 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.208791 6656 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:09:39.220373 6656 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:09:39.220427 6656 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:09:39.220477 6656 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 09:09:39.220514 6656 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:39.220546 6656 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:39.220593 6656 factory.go:656] Stopping watch factory\\\\nI0224 09:09:39.220631 6656 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting 
down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.273582 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.294633 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.296218 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.296310 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.296337 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.296370 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.296393 4822 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.314760 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.337376 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:57 crc kubenswrapper[4822]: E0224 09:09:57.337500 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.337741 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:57 crc kubenswrapper[4822]: E0224 09:09:57.337852 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.338162 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:57 crc kubenswrapper[4822]: E0224 09:09:57.338278 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.338483 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:57 crc kubenswrapper[4822]: E0224 09:09:57.338565 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.359477 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.365081 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.375774 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.395510 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 
09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.406317 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.406380 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.406399 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.406431 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.406458 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.420130 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.437011 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.452530 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc 
kubenswrapper[4822]: I0224 09:09:57.471206 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:57 crc 
kubenswrapper[4822]: I0224 09:09:57.510237 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.510288 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.510305 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.510329 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.510348 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.613949 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.614026 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.614046 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.614069 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.614087 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.716848 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.716904 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.716961 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.716986 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.717004 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.820358 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.820435 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.820459 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.820494 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.820520 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.923582 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.923627 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.923648 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.923677 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:57 crc kubenswrapper[4822]: I0224 09:09:57.923702 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:57Z","lastTransitionTime":"2026-02-24T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.027166 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.027238 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.027262 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.027295 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.027319 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:58Z","lastTransitionTime":"2026-02-24T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.089044 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/2.log" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.094719 4822 scope.go:117] "RemoveContainer" containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" Feb 24 09:09:58 crc kubenswrapper[4822]: E0224 09:09:58.095055 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.113809 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.130776 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.130837 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.130856 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.130881 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.130898 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:58Z","lastTransitionTime":"2026-02-24T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.138768 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.158830 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.174782 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.198538 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.221085 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 
09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.233672 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.233972 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.234335 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.234526 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.234648 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:09:58Z","lastTransitionTime":"2026-02-24T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.244535 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.263793 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.282124 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc 
kubenswrapper[4822]: I0224 09:09:58.300605 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.317760 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: E0224 09:09:58.335297 4822 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 
09:09:58.354543 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.375364 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.397207 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.415821 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.434451 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.466949 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.486156 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: E0224 09:09:58.500340 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.503968 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.520253 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.535060 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.552150 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 
09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.566884 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.581753 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.599165 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc 
kubenswrapper[4822]: I0224 09:09:58.618754 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc 
kubenswrapper[4822]: I0224 09:09:58.634523 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652e
e4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.653404 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/
log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a564
6fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.671347 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.688191 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.704313 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.719615 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.740320 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.756501 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.776331 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:58 crc kubenswrapper[4822]: I0224 09:09:58.795001 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:09:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:09:59 crc kubenswrapper[4822]: I0224 09:09:59.336921 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:09:59 crc kubenswrapper[4822]: E0224 09:09:59.337028 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:09:59 crc kubenswrapper[4822]: I0224 09:09:59.337070 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:09:59 crc kubenswrapper[4822]: I0224 09:09:59.337111 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:09:59 crc kubenswrapper[4822]: I0224 09:09:59.337070 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:09:59 crc kubenswrapper[4822]: E0224 09:09:59.337358 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:09:59 crc kubenswrapper[4822]: E0224 09:09:59.337745 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:09:59 crc kubenswrapper[4822]: E0224 09:09:59.337864 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.380015 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.380132 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.380158 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.380191 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.380217 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:00Z","lastTransitionTime":"2026-02-24T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.400833 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:00Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.406057 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.406123 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.406148 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.406175 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.406191 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:00Z","lastTransitionTime":"2026-02-24T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.425881 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:00Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.430987 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.431038 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.431062 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.431092 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.431115 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:00Z","lastTransitionTime":"2026-02-24T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.450449 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:00Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.455699 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.455761 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.455779 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.455804 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.455820 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:00Z","lastTransitionTime":"2026-02-24T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.475804 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:00Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.480611 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.480677 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.480695 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.480723 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:00 crc kubenswrapper[4822]: I0224 09:10:00.480743 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:00Z","lastTransitionTime":"2026-02-24T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.500523 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:00Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:00 crc kubenswrapper[4822]: E0224 09:10:00.500743 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:10:01 crc kubenswrapper[4822]: I0224 09:10:01.336389 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:01 crc kubenswrapper[4822]: I0224 09:10:01.336434 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:01 crc kubenswrapper[4822]: E0224 09:10:01.336901 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:01 crc kubenswrapper[4822]: I0224 09:10:01.336548 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:01 crc kubenswrapper[4822]: I0224 09:10:01.336520 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:01 crc kubenswrapper[4822]: E0224 09:10:01.337081 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:01 crc kubenswrapper[4822]: E0224 09:10:01.337009 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:01 crc kubenswrapper[4822]: E0224 09:10:01.337339 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:03 crc kubenswrapper[4822]: I0224 09:10:03.336250 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:03 crc kubenswrapper[4822]: I0224 09:10:03.336310 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:03 crc kubenswrapper[4822]: E0224 09:10:03.336445 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:03 crc kubenswrapper[4822]: I0224 09:10:03.336467 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:03 crc kubenswrapper[4822]: I0224 09:10:03.336531 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:03 crc kubenswrapper[4822]: E0224 09:10:03.336682 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:03 crc kubenswrapper[4822]: E0224 09:10:03.336773 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:03 crc kubenswrapper[4822]: E0224 09:10:03.336896 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:03 crc kubenswrapper[4822]: E0224 09:10:03.502349 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:05 crc kubenswrapper[4822]: I0224 09:10:05.336553 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:05 crc kubenswrapper[4822]: I0224 09:10:05.336607 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:05 crc kubenswrapper[4822]: I0224 09:10:05.336565 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:05 crc kubenswrapper[4822]: I0224 09:10:05.336708 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:05 crc kubenswrapper[4822]: E0224 09:10:05.336745 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:05 crc kubenswrapper[4822]: E0224 09:10:05.336830 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:05 crc kubenswrapper[4822]: E0224 09:10:05.336972 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:05 crc kubenswrapper[4822]: E0224 09:10:05.337156 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.010602 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.028583 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949
b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.048609 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.070279 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.087201 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.103579 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc 
kubenswrapper[4822]: I0224 09:10:06.122120 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.138258 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.160058 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.180670 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.200902 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.220377 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.237614 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be
59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.268119 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.291998 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.323961 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.345284 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.366184 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:06 crc kubenswrapper[4822]: I0224 09:10:06.383867 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:07 crc kubenswrapper[4822]: I0224 09:10:07.336301 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:07 crc kubenswrapper[4822]: I0224 09:10:07.336346 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:07 crc kubenswrapper[4822]: I0224 09:10:07.336353 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:07 crc kubenswrapper[4822]: I0224 09:10:07.336441 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:07 crc kubenswrapper[4822]: E0224 09:10:07.336660 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:07 crc kubenswrapper[4822]: E0224 09:10:07.336859 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:07 crc kubenswrapper[4822]: E0224 09:10:07.337101 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:07 crc kubenswrapper[4822]: E0224 09:10:07.337181 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.360843 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.382238 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.399417 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.431291 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.459432 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.503091 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: E0224 09:10:08.503263 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.529267 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.553337 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.577510 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.599822 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.621112 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.641770 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.658291 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.673883 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc 
kubenswrapper[4822]: I0224 09:10:08.692170 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.708044 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.729102 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:08 crc kubenswrapper[4822]: I0224 09:10:08.749564 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:08Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:09 crc kubenswrapper[4822]: I0224 09:10:09.336969 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:09 crc kubenswrapper[4822]: I0224 09:10:09.336973 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:09 crc kubenswrapper[4822]: I0224 09:10:09.336978 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:09 crc kubenswrapper[4822]: I0224 09:10:09.336996 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:09 crc kubenswrapper[4822]: E0224 09:10:09.337492 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:09 crc kubenswrapper[4822]: E0224 09:10:09.337644 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:09 crc kubenswrapper[4822]: E0224 09:10:09.337748 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:09 crc kubenswrapper[4822]: E0224 09:10:09.337870 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:09 crc kubenswrapper[4822]: I0224 09:10:09.338064 4822 scope.go:117] "RemoveContainer" containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" Feb 24 09:10:09 crc kubenswrapper[4822]: E0224 09:10:09.338469 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.864815 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.864878 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.864901 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.864974 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.865001 
4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:10Z","lastTransitionTime":"2026-02-24T09:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.886202 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.891137 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.891197 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.891221 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.891251 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.891277 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:10Z","lastTransitionTime":"2026-02-24T09:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.911149 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.916384 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.916450 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.916474 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.916503 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.916526 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:10Z","lastTransitionTime":"2026-02-24T09:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.936220 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.941107 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.941161 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.941181 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.941204 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.941221 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:10Z","lastTransitionTime":"2026-02-24T09:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.958019 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.962601 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.962716 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.962734 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.962760 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:10 crc kubenswrapper[4822]: I0224 09:10:10.962777 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:10Z","lastTransitionTime":"2026-02-24T09:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.982467 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:10 crc kubenswrapper[4822]: E0224 09:10:10.982697 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:10:11 crc kubenswrapper[4822]: I0224 09:10:11.336384 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:11 crc kubenswrapper[4822]: I0224 09:10:11.336436 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:11 crc kubenswrapper[4822]: I0224 09:10:11.336384 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:11 crc kubenswrapper[4822]: E0224 09:10:11.336574 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:11 crc kubenswrapper[4822]: E0224 09:10:11.336728 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:11 crc kubenswrapper[4822]: E0224 09:10:11.336895 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:11 crc kubenswrapper[4822]: I0224 09:10:11.337008 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:11 crc kubenswrapper[4822]: E0224 09:10:11.337107 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:13 crc kubenswrapper[4822]: I0224 09:10:13.336204 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:13 crc kubenswrapper[4822]: E0224 09:10:13.337121 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:13 crc kubenswrapper[4822]: I0224 09:10:13.336378 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:13 crc kubenswrapper[4822]: I0224 09:10:13.336598 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:13 crc kubenswrapper[4822]: I0224 09:10:13.336298 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:13 crc kubenswrapper[4822]: E0224 09:10:13.337798 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:13 crc kubenswrapper[4822]: E0224 09:10:13.338077 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:13 crc kubenswrapper[4822]: E0224 09:10:13.337598 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:13 crc kubenswrapper[4822]: E0224 09:10:13.505595 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:15 crc kubenswrapper[4822]: I0224 09:10:15.336532 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:15 crc kubenswrapper[4822]: E0224 09:10:15.336732 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:15 crc kubenswrapper[4822]: I0224 09:10:15.336800 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:15 crc kubenswrapper[4822]: I0224 09:10:15.336871 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:15 crc kubenswrapper[4822]: I0224 09:10:15.336879 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:15 crc kubenswrapper[4822]: E0224 09:10:15.337017 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:15 crc kubenswrapper[4822]: E0224 09:10:15.337111 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:15 crc kubenswrapper[4822]: E0224 09:10:15.337227 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:15 crc kubenswrapper[4822]: I0224 09:10:15.354806 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 09:10:17 crc kubenswrapper[4822]: I0224 09:10:17.336828 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:17 crc kubenswrapper[4822]: I0224 09:10:17.336828 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:17 crc kubenswrapper[4822]: E0224 09:10:17.337327 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:17 crc kubenswrapper[4822]: I0224 09:10:17.336973 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:17 crc kubenswrapper[4822]: I0224 09:10:17.336881 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:17 crc kubenswrapper[4822]: E0224 09:10:17.337636 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:17 crc kubenswrapper[4822]: E0224 09:10:17.337748 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:17 crc kubenswrapper[4822]: E0224 09:10:17.337808 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.173013 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/0.log" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.173103 4822 generic.go:334] "Generic (PLEG): container finished" podID="90b654a4-010b-4a5e-b2d8-d42764fcb628" containerID="96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a" exitCode=1 Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.173151 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerDied","Data":"96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a"} Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.173729 4822 scope.go:117] "RemoveContainer" containerID="96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.195780 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.216039 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.239067 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.257865 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc 
kubenswrapper[4822]: I0224 09:10:18.277795 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.294413 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.317251 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.333983 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.360600 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.381582 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.397481 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.417420 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.448390 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.471427 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: E0224 09:10:18.506374 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.506496 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.534550 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.554966 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.574947 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.594004 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.628953 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.652960 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.682284 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.699533 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.715099 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.732888 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.748387 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.765272 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.786957 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.807498 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.821556 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc 
kubenswrapper[4822]: I0224 09:10:18.838840 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.853364 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.885051 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.902119 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.922552 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.944903 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.961575 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:18 crc kubenswrapper[4822]: I0224 09:10:18.980222 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.180769 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/0.log" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.180856 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerStarted","Data":"81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2"} Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.204337 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.226539 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.244494 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.262202 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.277326 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.293085 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc 
kubenswrapper[4822]: I0224 09:10:19.311426 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.317431 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.317529 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.317592 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.317626 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.317661 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.317821 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.317844 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.317864 4822 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.317966 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:23.317901339 +0000 UTC m=+205.705663917 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318200 4822 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318269 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:23.318237589 +0000 UTC m=+205.706000177 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318302 4822 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318305 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:23.31829263 +0000 UTC m=+205.706055208 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318208 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318369 4822 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318384 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:23.318361582 +0000 UTC m=+205.706124210 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318391 4822 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.318448 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:23.318435894 +0000 UTC m=+205.706198472 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.327700 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns
-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.336599 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.336613 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.336676 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.337163 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.337239 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.336754 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.337354 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.336986 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.350502 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.367472 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.391550 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.412651 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.431044 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.448867 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7e
f91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.479494 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.502716 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.519873 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.520103 4822 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: E0224 09:10:19.520208 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs podName:f51aff12-328f-4b79-8dbb-2079510f45dc nodeName:}" failed. No retries permitted until 2026-02-24 09:11:23.520173621 +0000 UTC m=+205.907936219 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs") pod "network-metrics-daemon-htbq4" (UID: "f51aff12-328f-4b79-8dbb-2079510f45dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.534858 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47647
01e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.554810 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:19 crc kubenswrapper[4822]: I0224 09:10:19.575868 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:19Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.337266 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.337293 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.337483 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.337567 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.337731 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.337885 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.338803 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.339024 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.369734 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.369818 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.369832 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.369863 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.369879 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:21Z","lastTransitionTime":"2026-02-24T09:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.386402 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.392646 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.392707 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.392734 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.392766 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.392790 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:21Z","lastTransitionTime":"2026-02-24T09:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.417145 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.421656 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.421728 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.421747 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.421773 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.421790 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:21Z","lastTransitionTime":"2026-02-24T09:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.436434 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.441114 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.441249 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.441274 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.441303 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.441328 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:21Z","lastTransitionTime":"2026-02-24T09:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.459335 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.463731 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.463799 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.463823 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.463853 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:21 crc kubenswrapper[4822]: I0224 09:10:21.463877 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:21Z","lastTransitionTime":"2026-02-24T09:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.480832 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:21 crc kubenswrapper[4822]: E0224 09:10:21.480977 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:10:23 crc kubenswrapper[4822]: I0224 09:10:23.337209 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:23 crc kubenswrapper[4822]: I0224 09:10:23.337275 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:23 crc kubenswrapper[4822]: I0224 09:10:23.337325 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:23 crc kubenswrapper[4822]: I0224 09:10:23.337229 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:23 crc kubenswrapper[4822]: E0224 09:10:23.337439 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:23 crc kubenswrapper[4822]: E0224 09:10:23.337636 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:23 crc kubenswrapper[4822]: E0224 09:10:23.337717 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:23 crc kubenswrapper[4822]: E0224 09:10:23.337791 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:23 crc kubenswrapper[4822]: I0224 09:10:23.338892 4822 scope.go:117] "RemoveContainer" containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" Feb 24 09:10:23 crc kubenswrapper[4822]: E0224 09:10:23.508638 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.202278 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/2.log" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.205491 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.206225 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.222771 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7e
f91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.253970 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.275709 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.301477 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.319767 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.337069 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.353468 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.377639 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.396026 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.413195 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.430147 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.451187 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc 
kubenswrapper[4822]: I0224 09:10:24.466707 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.485317 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.510425 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.525235 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.540121 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.554178 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:24 crc kubenswrapper[4822]: I0224 09:10:24.569660 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.212445 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.213404 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/2.log" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.217657 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" exitCode=1 Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.217740 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.217822 4822 scope.go:117] "RemoveContainer" 
containerID="421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.219119 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:10:25 crc kubenswrapper[4822]: E0224 09:10:25.219451 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.237044 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.254558 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc 
kubenswrapper[4822]: I0224 09:10:25.272558 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.289481 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.313194 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.329847 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.336621 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.336621 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:25 crc kubenswrapper[4822]: E0224 09:10:25.336799 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.336874 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:25 crc kubenswrapper[4822]: E0224 09:10:25.337004 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.336772 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:25 crc kubenswrapper[4822]: E0224 09:10:25.337272 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:25 crc kubenswrapper[4822]: E0224 09:10:25.337331 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.350890 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.371772 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.392422 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.411005 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7e
f91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.441116 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421aaf1e60edabf5e78959804ebeaf16249eb1a54982539cf787962cae2e3ad3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:09:56Z\\\",\\\"message\\\":\\\"top retrying failed objects of type *v1.Namespace\\\\nI0224 09:09:56.344555 6843 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:09:56.344017 6843 egressqos.go:301] Shutting down EgressQoS controller\\\\nI0224 09:09:56.344068 6843 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type 
*factory.egressNode\\\\nI0224 09:09:56.344830 6843 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:09:56.344833 6843 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:09:56.344841 6843 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:09:56.344835 6843 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:09:56.344852 6843 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:09:56.344833 6843 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:09:56.344991 6843 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 09:09:56.345011 6843 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 09:09:56.345033 6843 factory.go:656] Stopping watch factory\\\\nI0224 09:09:56.345055 6843 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:09:56.345090 6843 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:24Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0224 09:10:24.454347 7187 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default : 1.392471ms\\\\nI0224 09:10:24.454373 7187 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:10:24.454405 7187 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454525 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:10:24.454560 7187 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:10:24.454655 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 09:10:24.454658 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:10:24.454695 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:10:24.454700 7187 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:10:24.454719 7187 factory.go:656] Stopping watch factory\\\\nI0224 09:10:24.454750 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454778 7187 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:10:24.454810 7187 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:10:24.454834 7187 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:10:24.454896 7187 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.465545 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.499691 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.549993 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.568899 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.586963 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.602478 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.620057 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:25 crc kubenswrapper[4822]: I0224 09:10:25.639147 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.228118 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.232901 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:10:26 crc kubenswrapper[4822]: E0224 09:10:26.233166 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.251712 4822 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.269801 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.303291 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:24Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0224 09:10:24.454347 7187 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default : 1.392471ms\\\\nI0224 09:10:24.454373 7187 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 
09:10:24.454405 7187 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454525 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:10:24.454560 7187 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:10:24.454655 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 09:10:24.454658 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:10:24.454695 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:10:24.454700 7187 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:10:24.454719 7187 factory.go:656] Stopping watch factory\\\\nI0224 09:10:24.454750 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454778 7187 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:10:24.454810 7187 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:10:24.454834 7187 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:10:24.454896 7187 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:10:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.325785 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.350136 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.374265 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.394079 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.412572 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.432091 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.450821 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.467745 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.481440 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.495793 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc 
kubenswrapper[4822]: I0224 09:10:26.512279 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.526049 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.548086 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.564310 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.581792 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:26 crc kubenswrapper[4822]: I0224 09:10:26.602702 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:27 crc kubenswrapper[4822]: I0224 09:10:27.336552 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:27 crc kubenswrapper[4822]: I0224 09:10:27.336662 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:27 crc kubenswrapper[4822]: E0224 09:10:27.336775 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:27 crc kubenswrapper[4822]: I0224 09:10:27.336794 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:27 crc kubenswrapper[4822]: I0224 09:10:27.336817 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:27 crc kubenswrapper[4822]: E0224 09:10:27.337010 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:27 crc kubenswrapper[4822]: E0224 09:10:27.337170 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:27 crc kubenswrapper[4822]: E0224 09:10:27.337347 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.359351 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.381248 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.400352 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.418845 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc 
kubenswrapper[4822]: I0224 09:10:28.437846 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.455275 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.479569 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.498054 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: E0224 09:10:28.509224 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.520764 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files 
in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.542631 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.563614 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.583607 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.617710 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:24Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0224 09:10:24.454347 7187 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default : 1.392471ms\\\\nI0224 09:10:24.454373 7187 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 
09:10:24.454405 7187 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454525 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:10:24.454560 7187 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:10:24.454655 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 09:10:24.454658 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:10:24.454695 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:10:24.454700 7187 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:10:24.454719 7187 factory.go:656] Stopping watch factory\\\\nI0224 09:10:24.454750 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454778 7187 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:10:24.454810 7187 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:10:24.454834 7187 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:10:24.454896 7187 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:10:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.644421 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.682190 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc
6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.704711 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.727448 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.747590 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:28 crc kubenswrapper[4822]: I0224 09:10:28.771596 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:29 crc kubenswrapper[4822]: I0224 09:10:29.337008 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:29 crc kubenswrapper[4822]: I0224 09:10:29.337074 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:29 crc kubenswrapper[4822]: I0224 09:10:29.337091 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:29 crc kubenswrapper[4822]: I0224 09:10:29.337213 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:29 crc kubenswrapper[4822]: E0224 09:10:29.337210 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:29 crc kubenswrapper[4822]: E0224 09:10:29.337329 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:29 crc kubenswrapper[4822]: E0224 09:10:29.337432 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:29 crc kubenswrapper[4822]: E0224 09:10:29.337618 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.337226 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.337329 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.337276 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.337270 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.337480 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.337597 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.337667 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.337764 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.694648 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.694747 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.694774 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.694805 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.694832 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:31Z","lastTransitionTime":"2026-02-24T09:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.716895 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.723339 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.723442 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.723462 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.723488 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.723504 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:31Z","lastTransitionTime":"2026-02-24T09:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.744189 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.749463 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.749523 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.749547 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.749579 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.749601 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:31Z","lastTransitionTime":"2026-02-24T09:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.770297 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.776014 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.776077 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.776097 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.776125 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.776145 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:31Z","lastTransitionTime":"2026-02-24T09:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.797947 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.802977 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.803040 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.803063 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.803089 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:31 crc kubenswrapper[4822]: I0224 09:10:31.803109 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:31Z","lastTransitionTime":"2026-02-24T09:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.826746 4822 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a5c8732e-3240-474d-97f2-cd9f2c6e22aa\\\",\\\"systemUUID\\\":\\\"a2c96f03-56a9-40d5-9ba9-563a1da7316d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:31 crc kubenswrapper[4822]: E0224 09:10:31.827124 4822 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:10:33 crc kubenswrapper[4822]: I0224 09:10:33.336863 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:33 crc kubenswrapper[4822]: I0224 09:10:33.336890 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:33 crc kubenswrapper[4822]: E0224 09:10:33.337312 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:33 crc kubenswrapper[4822]: I0224 09:10:33.337051 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:33 crc kubenswrapper[4822]: I0224 09:10:33.336890 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:33 crc kubenswrapper[4822]: E0224 09:10:33.337470 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:33 crc kubenswrapper[4822]: E0224 09:10:33.337583 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:33 crc kubenswrapper[4822]: E0224 09:10:33.337698 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:33 crc kubenswrapper[4822]: E0224 09:10:33.511312 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:35 crc kubenswrapper[4822]: I0224 09:10:35.336666 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:35 crc kubenswrapper[4822]: I0224 09:10:35.336747 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:35 crc kubenswrapper[4822]: I0224 09:10:35.336777 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:35 crc kubenswrapper[4822]: I0224 09:10:35.336696 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:35 crc kubenswrapper[4822]: E0224 09:10:35.336886 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:35 crc kubenswrapper[4822]: E0224 09:10:35.337031 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:35 crc kubenswrapper[4822]: E0224 09:10:35.337110 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:35 crc kubenswrapper[4822]: E0224 09:10:35.337162 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:37 crc kubenswrapper[4822]: I0224 09:10:37.337194 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:37 crc kubenswrapper[4822]: E0224 09:10:37.338157 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:37 crc kubenswrapper[4822]: I0224 09:10:37.337329 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:37 crc kubenswrapper[4822]: E0224 09:10:37.338267 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:37 crc kubenswrapper[4822]: I0224 09:10:37.337282 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:37 crc kubenswrapper[4822]: I0224 09:10:37.337370 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:37 crc kubenswrapper[4822]: E0224 09:10:37.338358 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:37 crc kubenswrapper[4822]: E0224 09:10:37.338517 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.366279 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b20f634d-42fc-410d-bd88-f8a5217a5100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049e050ef9f1991307128e875cc6a4e29240a5e7c14c7e35686cef5ea40075d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://161dc5372c480c4c6401e2a366da2472613d6e1b70e07ed5d2b13e605149b71b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:08:24Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 
')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:08:00.651277 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:08:00.655420 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:08:00.692273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:08:00.698616 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:08:24.530014 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:08:24.530093 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:08:24Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5119b48d4016d083d02c6ccdb47e89a0d6b90e9907e9d21c0a37236be086453\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1dc0ba5c502740227408242b13ac68bf465d12f051a771295d69c245bc2b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.387065 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.408642 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4525470935b533581c4bf2d12d8cc18425b8fc363f2b501ab7773882832f112d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.426487 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fada5a7-935e-4bd3-931b-082fea67a9ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://576c14462c61f59afdca39fedf2d6ddd7b7c3a779afcc6edbeab01c2fad8c6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d090a61be59e6ddbda21197c89b0a80c4dc7ef91c5cb3e3c589c5d21b7018bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chdwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gmrxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.462321 4822 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-669bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f416e6-5647-4b65-b06f-df73aca5e594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:24Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0224 09:10:24.454347 7187 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default : 1.392471ms\\\\nI0224 09:10:24.454373 7187 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 
09:10:24.454405 7187 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454525 7187 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:10:24.454560 7187 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:10:24.454655 7187 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 09:10:24.454658 7187 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:10:24.454695 7187 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 09:10:24.454700 7187 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:10:24.454719 7187 factory.go:656] Stopping watch factory\\\\nI0224 09:10:24.454750 7187 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 09:10:24.454778 7187 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:10:24.454810 7187 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:10:24.454834 7187 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:10:24.454896 7187 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:10:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca10d00c26af090e9b
6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg8qv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-669bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.486314 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cw98v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e7eb7-5eca-40b1-b7b8-6683604024ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1151d10473d8bce326113fb334f2863e8577b504973f59405b67008b2cd06581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d21b35080f8c3cb9211a0f094ddff13800c443f304cc32dd7c2957bd62daedd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb54704350466f5eb68f9c410652ee4614a3408248fb36da6766844fdf424c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48fdb9d806eed5b45bed4cede3df8987d838795d84fe0f536087a7e842e62537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f9a
ad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9aad0fdfd3ea1f12f0dc2400b343551388becc6dee36a38dd830f627e5b51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e92e2ae76a648e6431f3f3f94fbaa085f444cd79f75f6e4a500cdbc2bce60f97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e44fa9d507da2492de178a120c0819dc1ae0a24689515bd14667489fe38bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:09:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzvv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cw98v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: E0224 09:10:38.513333 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.519777 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba152bf-73f0-4fbd-9814-864ca1f28e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0ebb59005449668c1b492bdc6155afb2b414ca3abca8e05e2b43aa59d23886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9078ca880a9ab26a4423effb527cfd1cb2c5288403daf02462f1286ddac0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba71fec71e1dff40dd70be718d077ad208ca4aa83726e186ddfee661bcd468a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4764701e648ba7fc6285bf69cac5297af5a21b2e2b3904237e3afe0fd049b1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6240de59556b2c3fd8e26937945a53c5f216dce135f0b0024c1fd8ba8b1fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3bf9901f8a5e1a52726573e43d614a77c1305b59426fead5d4f15e27c3ef06d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6b9168effe46ebec3888f923da1ee54e392258872f44ffabc71f5ec52ef4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d6ba901fd909408637d654debaf0ccb14392ed6d2fa294e8ffb443f7a9ef80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.545219 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72cfa68249a7e55b890cfa5284ef4c5e10dae317eeb6be34edc9d45803723e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.567454 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.585437 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb999ada-8542-46ce-82c8-e8f469a5bc18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bb23ad458041922bc4ccc692449b952d9aab77f8badb82f1f33355cb82f899c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e084b810bf294f0dc2565fa6199fa484a66a33d2d9a1c2192a71e2a6fbd6cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30bed1504be2ca00734175abe924a77562966abc782073db4d997ecf4bbb92c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f1ef5ebe08827dc25ed17949b250929b0d9e9b5c3a68aad23ca6ba0b71c28462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.602219 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffd53620-3610-479d-8061-abfe62314da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bdde25238c492704d6d80e1023ad200b9fd1bd5d319b516ccc24b2aeea4fd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aabc4776b2fe2c18694894241153d4af330a61847bbb0ca4e8554c9526112c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.622475 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888ca80a64aca876c966570ab2bca27d2f1d4e80fe42443a170382671bf5a002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a3802aceae54e33e3c19d855be2bd17cdb18cc4ff7110425685551e2fbff21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.642250 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.658601 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fbp47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cc2023-21a7-4205-9492-ec1d1a0d146b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22174b11b108be065985e9809bc93e49dcb17e237f5054e2e5771b82e7e42f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qvql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fbp47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.676684 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htbq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51aff12-328f-4b79-8dbb-2079510f45dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g75t9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htbq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc 
kubenswrapper[4822]: I0224 09:10:38.694904 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"306aba52-0b6e-4d3f-b05f-757daebc5e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714c471f8ec139e9610d9c3e7942cc91fc7c3e09d6b9d83a9498acc804542540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwn4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd752\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.712998 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t2gjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08191894-6514-4c09-aab9-e6c8f0f52354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7acc17a2035f017ceaa15e39eb12606e38a5902fdc87c6671ac2628ebaa4a4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t2gjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.735606 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2f6d18-d7b6-4c05-a90c-e6bf83a58862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:09:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:09:01.996048 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:09:01.996167 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:09:01.997001 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3601362715/tls.crt::/tmp/serving-cert-3601362715/tls.key\\\\\\\"\\\\nI0224 09:09:02.405618 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:09:02.409894 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:09:02.409954 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:09:02.410013 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:09:02.410028 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:09:02.418769 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:09:02.418799 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:09:02.418814 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418828 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:09:02.418838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:09:02.418846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:09:02.418853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:09:02.418860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:09:02.423036 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:08:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:07:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:38 crc kubenswrapper[4822]: I0224 09:10:38.755906 4822 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqrzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b654a4-010b-4a5e-b2d8-d42764fcb628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:10:17Z\\\",\\\"message\\\":\\\"2026-02-24T09:09:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8\\\\n2026-02-24T09:09:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c2d0b039-b7c7-4b61-b7d6-e17e88fd37b8 to /host/opt/cni/bin/\\\\n2026-02-24T09:09:32Z [verbose] multus-daemon started\\\\n2026-02-24T09:09:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:10:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:09:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g779d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:09:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqrzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:10:39 crc kubenswrapper[4822]: I0224 09:10:39.337348 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:39 crc kubenswrapper[4822]: I0224 09:10:39.337424 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:39 crc kubenswrapper[4822]: I0224 09:10:39.337380 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:39 crc kubenswrapper[4822]: I0224 09:10:39.338140 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:39 crc kubenswrapper[4822]: E0224 09:10:39.338368 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:39 crc kubenswrapper[4822]: E0224 09:10:39.338546 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:39 crc kubenswrapper[4822]: E0224 09:10:39.338680 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:39 crc kubenswrapper[4822]: E0224 09:10:39.338851 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:41 crc kubenswrapper[4822]: I0224 09:10:41.336562 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:41 crc kubenswrapper[4822]: I0224 09:10:41.336641 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:41 crc kubenswrapper[4822]: E0224 09:10:41.336768 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:41 crc kubenswrapper[4822]: I0224 09:10:41.336800 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:41 crc kubenswrapper[4822]: I0224 09:10:41.337329 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:41 crc kubenswrapper[4822]: E0224 09:10:41.337440 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:41 crc kubenswrapper[4822]: I0224 09:10:41.337906 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:10:41 crc kubenswrapper[4822]: E0224 09:10:41.338222 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:10:41 crc kubenswrapper[4822]: E0224 09:10:41.338318 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:41 crc kubenswrapper[4822]: E0224 09:10:41.339071 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.113251 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.113327 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.113347 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.113373 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.113391 4822 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:10:42Z","lastTransitionTime":"2026-02-24T09:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.188448 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4"] Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.189143 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.191170 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.192224 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.192232 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.192248 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.205472 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7283588d-2899-473a-9e20-2d973817d8c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.205604 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7283588d-2899-473a-9e20-2d973817d8c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.205644 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.205714 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.205764 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7283588d-2899-473a-9e20-2d973817d8c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.286046 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gmrxl" podStartSLOduration=119.286011317 podStartE2EDuration="1m59.286011317s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.252670642 +0000 UTC m=+164.640433210" watchObservedRunningTime="2026-02-24 09:10:42.286011317 +0000 UTC m=+164.673773915" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306246 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cw98v" podStartSLOduration=119.306214773 
podStartE2EDuration="1m59.306214773s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.303681968 +0000 UTC m=+164.691444556" watchObservedRunningTime="2026-02-24 09:10:42.306214773 +0000 UTC m=+164.693977361" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306320 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306375 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7283588d-2899-473a-9e20-2d973817d8c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306424 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7283588d-2899-473a-9e20-2d973817d8c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306457 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: 
\"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306491 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306603 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7283588d-2899-473a-9e20-2d973817d8c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.306630 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7283588d-2899-473a-9e20-2d973817d8c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.307943 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7283588d-2899-473a-9e20-2d973817d8c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.312054 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7283588d-2899-473a-9e20-2d973817d8c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.323350 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7283588d-2899-473a-9e20-2d973817d8c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6g9k4\" (UID: \"7283588d-2899-473a-9e20-2d973817d8c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.332953 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=56.332892401 podStartE2EDuration="56.332892401s" podCreationTimestamp="2026-02-24 09:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.332650243 +0000 UTC m=+164.720412801" watchObservedRunningTime="2026-02-24 09:10:42.332892401 +0000 UTC m=+164.720654969" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.351346 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=55.351303954 podStartE2EDuration="55.351303954s" podCreationTimestamp="2026-02-24 09:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.350393907 +0000 UTC m=+164.738156465" watchObservedRunningTime="2026-02-24 09:10:42.351303954 +0000 UTC m=+164.739066502" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.381838 4822 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:10:42 crc 
kubenswrapper[4822]: I0224 09:10:42.390988 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.404808 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.404779513 podStartE2EDuration="45.404779513s" podCreationTimestamp="2026-02-24 09:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.404464094 +0000 UTC m=+164.792226652" watchObservedRunningTime="2026-02-24 09:10:42.404779513 +0000 UTC m=+164.792542101" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.472582 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fbp47" podStartSLOduration=120.472549393 podStartE2EDuration="2m0.472549393s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.472549643 +0000 UTC m=+164.860312211" watchObservedRunningTime="2026-02-24 09:10:42.472549393 +0000 UTC m=+164.860311981" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.498267 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podStartSLOduration=120.498232773 podStartE2EDuration="2m0.498232773s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.497984614 +0000 UTC m=+164.885747182" watchObservedRunningTime="2026-02-24 09:10:42.498232773 +0000 UTC m=+164.885995371" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.506776 4822 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.511967 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-t2gjf" podStartSLOduration=120.511943446 podStartE2EDuration="2m0.511943446s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.510991009 +0000 UTC m=+164.898753587" watchObservedRunningTime="2026-02-24 09:10:42.511943446 +0000 UTC m=+164.899706034" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.537065 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.537033117 podStartE2EDuration="1m25.537033117s" podCreationTimestamp="2026-02-24 09:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.536834741 +0000 UTC m=+164.924597309" watchObservedRunningTime="2026-02-24 09:10:42.537033117 +0000 UTC m=+164.924795745" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.551194 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.551173385 podStartE2EDuration="27.551173385s" podCreationTimestamp="2026-02-24 09:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.550011181 +0000 UTC m=+164.937773749" watchObservedRunningTime="2026-02-24 09:10:42.551173385 +0000 UTC m=+164.938935953" Feb 24 09:10:42 crc kubenswrapper[4822]: I0224 09:10:42.565256 4822 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/multus-lqrzq" podStartSLOduration=119.56523727 podStartE2EDuration="1m59.56523727s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:42.564987283 +0000 UTC m=+164.952749851" watchObservedRunningTime="2026-02-24 09:10:42.56523727 +0000 UTC m=+164.952999828" Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.299423 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" event={"ID":"7283588d-2899-473a-9e20-2d973817d8c9","Type":"ContainerStarted","Data":"764d2d94453b85569f9f02329e1ae848c33d1cd4161c1cb608ca277b801aaed3"} Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.299492 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" event={"ID":"7283588d-2899-473a-9e20-2d973817d8c9","Type":"ContainerStarted","Data":"e1bfc6bb8a3cdc83bc9265fe7fbd4398151f80a2b7a0623c7e3cc08144065798"} Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.322076 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6g9k4" podStartSLOduration=121.322050855 podStartE2EDuration="2m1.322050855s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:10:43.321465087 +0000 UTC m=+165.709227665" watchObservedRunningTime="2026-02-24 09:10:43.322050855 +0000 UTC m=+165.709813463" Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.336575 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.336653 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.336966 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:43 crc kubenswrapper[4822]: I0224 09:10:43.337021 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:43 crc kubenswrapper[4822]: E0224 09:10:43.337137 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:43 crc kubenswrapper[4822]: E0224 09:10:43.337332 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:43 crc kubenswrapper[4822]: E0224 09:10:43.337527 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:43 crc kubenswrapper[4822]: E0224 09:10:43.337677 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:43 crc kubenswrapper[4822]: E0224 09:10:43.515355 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:45 crc kubenswrapper[4822]: I0224 09:10:45.336784 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:45 crc kubenswrapper[4822]: I0224 09:10:45.336841 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:45 crc kubenswrapper[4822]: I0224 09:10:45.336868 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:45 crc kubenswrapper[4822]: I0224 09:10:45.336986 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:45 crc kubenswrapper[4822]: E0224 09:10:45.338569 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:45 crc kubenswrapper[4822]: E0224 09:10:45.338779 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:45 crc kubenswrapper[4822]: E0224 09:10:45.338691 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:45 crc kubenswrapper[4822]: E0224 09:10:45.338944 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:47 crc kubenswrapper[4822]: I0224 09:10:47.336332 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:47 crc kubenswrapper[4822]: I0224 09:10:47.336361 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:47 crc kubenswrapper[4822]: E0224 09:10:47.336514 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:47 crc kubenswrapper[4822]: E0224 09:10:47.336605 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:47 crc kubenswrapper[4822]: I0224 09:10:47.337124 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:47 crc kubenswrapper[4822]: I0224 09:10:47.337249 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:47 crc kubenswrapper[4822]: E0224 09:10:47.337305 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:47 crc kubenswrapper[4822]: E0224 09:10:47.337357 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:48 crc kubenswrapper[4822]: E0224 09:10:48.515964 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:49 crc kubenswrapper[4822]: I0224 09:10:49.336871 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:49 crc kubenswrapper[4822]: I0224 09:10:49.336949 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:49 crc kubenswrapper[4822]: I0224 09:10:49.336985 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:49 crc kubenswrapper[4822]: I0224 09:10:49.336883 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:49 crc kubenswrapper[4822]: E0224 09:10:49.337188 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:49 crc kubenswrapper[4822]: E0224 09:10:49.337381 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:49 crc kubenswrapper[4822]: E0224 09:10:49.337459 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:49 crc kubenswrapper[4822]: E0224 09:10:49.337538 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:51 crc kubenswrapper[4822]: I0224 09:10:51.336611 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:51 crc kubenswrapper[4822]: I0224 09:10:51.336646 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:51 crc kubenswrapper[4822]: I0224 09:10:51.336612 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:51 crc kubenswrapper[4822]: E0224 09:10:51.336984 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:51 crc kubenswrapper[4822]: I0224 09:10:51.337051 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:51 crc kubenswrapper[4822]: E0224 09:10:51.337248 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:51 crc kubenswrapper[4822]: E0224 09:10:51.337344 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:51 crc kubenswrapper[4822]: E0224 09:10:51.337428 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:52 crc kubenswrapper[4822]: I0224 09:10:52.341470 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:10:52 crc kubenswrapper[4822]: E0224 09:10:52.341968 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:10:53 crc kubenswrapper[4822]: I0224 09:10:53.337229 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:53 crc kubenswrapper[4822]: I0224 09:10:53.337330 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:53 crc kubenswrapper[4822]: I0224 09:10:53.337378 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:53 crc kubenswrapper[4822]: I0224 09:10:53.337571 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:53 crc kubenswrapper[4822]: E0224 09:10:53.337769 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:53 crc kubenswrapper[4822]: E0224 09:10:53.337850 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:53 crc kubenswrapper[4822]: E0224 09:10:53.337995 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:53 crc kubenswrapper[4822]: E0224 09:10:53.338097 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:53 crc kubenswrapper[4822]: E0224 09:10:53.516850 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:55 crc kubenswrapper[4822]: I0224 09:10:55.337251 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:55 crc kubenswrapper[4822]: I0224 09:10:55.337337 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:55 crc kubenswrapper[4822]: I0224 09:10:55.337354 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:55 crc kubenswrapper[4822]: E0224 09:10:55.337425 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:55 crc kubenswrapper[4822]: I0224 09:10:55.337466 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:55 crc kubenswrapper[4822]: E0224 09:10:55.337532 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:55 crc kubenswrapper[4822]: E0224 09:10:55.337673 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:55 crc kubenswrapper[4822]: E0224 09:10:55.337823 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:57 crc kubenswrapper[4822]: I0224 09:10:57.336211 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:57 crc kubenswrapper[4822]: I0224 09:10:57.336212 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:57 crc kubenswrapper[4822]: I0224 09:10:57.336348 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:57 crc kubenswrapper[4822]: E0224 09:10:57.336633 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:57 crc kubenswrapper[4822]: E0224 09:10:57.337155 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:10:57 crc kubenswrapper[4822]: I0224 09:10:57.337287 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:57 crc kubenswrapper[4822]: E0224 09:10:57.337459 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:57 crc kubenswrapper[4822]: E0224 09:10:57.338052 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:58 crc kubenswrapper[4822]: E0224 09:10:58.517334 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:10:59 crc kubenswrapper[4822]: I0224 09:10:59.336457 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:10:59 crc kubenswrapper[4822]: I0224 09:10:59.336514 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:10:59 crc kubenswrapper[4822]: I0224 09:10:59.336588 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:10:59 crc kubenswrapper[4822]: I0224 09:10:59.336657 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:10:59 crc kubenswrapper[4822]: E0224 09:10:59.336896 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:10:59 crc kubenswrapper[4822]: E0224 09:10:59.337171 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:10:59 crc kubenswrapper[4822]: E0224 09:10:59.337287 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:10:59 crc kubenswrapper[4822]: E0224 09:10:59.337477 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:01 crc kubenswrapper[4822]: I0224 09:11:01.336853 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:01 crc kubenswrapper[4822]: I0224 09:11:01.336965 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:01 crc kubenswrapper[4822]: I0224 09:11:01.336978 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:01 crc kubenswrapper[4822]: I0224 09:11:01.337478 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:01 crc kubenswrapper[4822]: E0224 09:11:01.337560 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:01 crc kubenswrapper[4822]: E0224 09:11:01.337749 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:01 crc kubenswrapper[4822]: E0224 09:11:01.337854 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:01 crc kubenswrapper[4822]: E0224 09:11:01.338027 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:03 crc kubenswrapper[4822]: I0224 09:11:03.336308 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:03 crc kubenswrapper[4822]: I0224 09:11:03.336344 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:03 crc kubenswrapper[4822]: I0224 09:11:03.336375 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.336521 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:03 crc kubenswrapper[4822]: I0224 09:11:03.336573 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.336750 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.337343 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.337443 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:03 crc kubenswrapper[4822]: I0224 09:11:03.338311 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.338575 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-669bp_openshift-ovn-kubernetes(72f416e6-5647-4b65-b06f-df73aca5e594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" Feb 24 09:11:03 crc kubenswrapper[4822]: E0224 09:11:03.519589 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.377736 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/1.log" Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.378434 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/0.log" Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.378493 4822 generic.go:334] "Generic (PLEG): container finished" podID="90b654a4-010b-4a5e-b2d8-d42764fcb628" containerID="81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2" exitCode=1 Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.378527 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerDied","Data":"81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2"} Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.378580 4822 scope.go:117] "RemoveContainer" containerID="96124054c109e534a338814cf39c5948da05e8ec3dd7e62985d440380310c97a" Feb 24 09:11:04 crc kubenswrapper[4822]: I0224 09:11:04.379424 4822 scope.go:117] "RemoveContainer" containerID="81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2" Feb 24 09:11:04 crc kubenswrapper[4822]: E0224 09:11:04.379807 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lqrzq_openshift-multus(90b654a4-010b-4a5e-b2d8-d42764fcb628)\"" pod="openshift-multus/multus-lqrzq" podUID="90b654a4-010b-4a5e-b2d8-d42764fcb628" Feb 24 09:11:05 crc kubenswrapper[4822]: I0224 09:11:05.337000 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:05 crc kubenswrapper[4822]: I0224 09:11:05.337047 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:05 crc kubenswrapper[4822]: I0224 09:11:05.337007 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:05 crc kubenswrapper[4822]: I0224 09:11:05.337132 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:05 crc kubenswrapper[4822]: E0224 09:11:05.337185 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:05 crc kubenswrapper[4822]: E0224 09:11:05.337271 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:05 crc kubenswrapper[4822]: E0224 09:11:05.337581 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:05 crc kubenswrapper[4822]: E0224 09:11:05.337462 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:05 crc kubenswrapper[4822]: I0224 09:11:05.395097 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/1.log" Feb 24 09:11:07 crc kubenswrapper[4822]: I0224 09:11:07.337221 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:07 crc kubenswrapper[4822]: I0224 09:11:07.337260 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:07 crc kubenswrapper[4822]: E0224 09:11:07.337474 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:07 crc kubenswrapper[4822]: I0224 09:11:07.337869 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:07 crc kubenswrapper[4822]: I0224 09:11:07.337885 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:07 crc kubenswrapper[4822]: E0224 09:11:07.338045 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:07 crc kubenswrapper[4822]: E0224 09:11:07.338211 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:07 crc kubenswrapper[4822]: E0224 09:11:07.338383 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:08 crc kubenswrapper[4822]: E0224 09:11:08.520277 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 24 09:11:09 crc kubenswrapper[4822]: I0224 09:11:09.336963 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:09 crc kubenswrapper[4822]: I0224 09:11:09.337018 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:09 crc kubenswrapper[4822]: I0224 09:11:09.337029 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:09 crc kubenswrapper[4822]: I0224 09:11:09.336970 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:09 crc kubenswrapper[4822]: E0224 09:11:09.337132 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:09 crc kubenswrapper[4822]: E0224 09:11:09.337339 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:09 crc kubenswrapper[4822]: E0224 09:11:09.337455 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:09 crc kubenswrapper[4822]: E0224 09:11:09.337606 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:11 crc kubenswrapper[4822]: I0224 09:11:11.337182 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:11 crc kubenswrapper[4822]: I0224 09:11:11.337285 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:11 crc kubenswrapper[4822]: I0224 09:11:11.337364 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:11 crc kubenswrapper[4822]: E0224 09:11:11.337518 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:11 crc kubenswrapper[4822]: I0224 09:11:11.337583 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:11 crc kubenswrapper[4822]: E0224 09:11:11.337696 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:11 crc kubenswrapper[4822]: E0224 09:11:11.337756 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:11 crc kubenswrapper[4822]: E0224 09:11:11.337890 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:13 crc kubenswrapper[4822]: I0224 09:11:13.336900 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:13 crc kubenswrapper[4822]: I0224 09:11:13.336949 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:13 crc kubenswrapper[4822]: I0224 09:11:13.337033 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:13 crc kubenswrapper[4822]: I0224 09:11:13.337033 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:13 crc kubenswrapper[4822]: E0224 09:11:13.337107 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:13 crc kubenswrapper[4822]: E0224 09:11:13.337233 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:13 crc kubenswrapper[4822]: E0224 09:11:13.337319 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:13 crc kubenswrapper[4822]: E0224 09:11:13.337464 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:13 crc kubenswrapper[4822]: E0224 09:11:13.522172 4822 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:11:14 crc kubenswrapper[4822]: I0224 09:11:14.338372 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.336577 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.336621 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:15 crc kubenswrapper[4822]: E0224 09:11:15.337325 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.336790 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:15 crc kubenswrapper[4822]: E0224 09:11:15.337103 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.336702 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:15 crc kubenswrapper[4822]: E0224 09:11:15.337728 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:15 crc kubenswrapper[4822]: E0224 09:11:15.337519 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.413476 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-htbq4"] Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.436602 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.441718 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerStarted","Data":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.441741 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.442386 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:11:15 crc kubenswrapper[4822]: E0224 09:11:15.442601 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:15 crc kubenswrapper[4822]: I0224 09:11:15.496620 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podStartSLOduration=152.4965643 podStartE2EDuration="2m32.4965643s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:15.496464877 +0000 UTC m=+197.884227455" watchObservedRunningTime="2026-02-24 09:11:15.4965643 +0000 UTC m=+197.884326888" Feb 24 09:11:16 crc kubenswrapper[4822]: I0224 09:11:16.336627 4822 scope.go:117] "RemoveContainer" containerID="81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2" Feb 24 09:11:16 crc kubenswrapper[4822]: I0224 09:11:16.447481 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/1.log" Feb 24 09:11:16 crc kubenswrapper[4822]: I0224 09:11:16.448082 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerStarted","Data":"e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841"} Feb 24 
09:11:17 crc kubenswrapper[4822]: I0224 09:11:17.336437 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:17 crc kubenswrapper[4822]: E0224 09:11:17.336906 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:17 crc kubenswrapper[4822]: I0224 09:11:17.337309 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:17 crc kubenswrapper[4822]: E0224 09:11:17.337668 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:17 crc kubenswrapper[4822]: I0224 09:11:17.337409 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:17 crc kubenswrapper[4822]: E0224 09:11:17.338045 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:17 crc kubenswrapper[4822]: I0224 09:11:17.336525 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:17 crc kubenswrapper[4822]: E0224 09:11:17.338373 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htbq4" podUID="f51aff12-328f-4b79-8dbb-2079510f45dc" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.336897 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.337020 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.337037 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.337746 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.339872 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.340772 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.341119 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.341146 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.341236 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 09:11:19 crc kubenswrapper[4822]: I0224 09:11:19.341259 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.016466 4822 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.101741 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4qd6h"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.102502 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.109788 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.112847 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vhssf"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.113887 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.115785 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.116168 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.121144 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.121263 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.121433 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.124561 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.125225 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.125341 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.125416 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.125602 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.126647 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.127155 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.128323 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6srg"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.128951 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.131525 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.135898 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.136399 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8sr4g"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.136866 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.139644 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.140521 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.142116 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.142363 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.142510 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.143089 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.154676 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.155147 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.157014 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.157707 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.157795 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.157816 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.158430 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.158683 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.158896 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.159063 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.161061 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161143 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161242 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161411 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161439 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161609 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161629 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161700 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161709 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161771 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161799 4822 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161863 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161972 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.162059 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.161734 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.162150 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.162163 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.162240 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.163308 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.163519 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.163672 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.163835 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.164015 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.164154 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.164279 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165106 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165394 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165415 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165590 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165632 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165598 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.165721 4822 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.167231 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-csmqv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.167876 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.168359 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.168839 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.169124 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.172693 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.173001 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.173458 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.173661 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.173968 4822 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.174217 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbzp9"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.174499 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.173987 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.174722 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.174947 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.175366 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.178990 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dwqqh"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.179558 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.183791 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.184055 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.184169 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.184175 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.201840 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-z895m"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.202813 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-z895m" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204613 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5430e137-6e26-43f5-bd31-7a2c83e9997c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204644 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbrm\" (UniqueName: \"kubernetes.io/projected/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-kube-api-access-mjbrm\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204664 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf249\" (UniqueName: \"kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204688 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ece2924-64f7-4738-9049-4ac6fa8dcd75-machine-approver-tls\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204707 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204728 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tsm\" (UniqueName: \"kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204760 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-serving-cert\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204787 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99ql\" (UniqueName: \"kubernetes.io/projected/0ece2924-64f7-4738-9049-4ac6fa8dcd75-kube-api-access-h99ql\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204806 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfcv\" (UniqueName: \"kubernetes.io/projected/68d26dd2-7abb-404f-a8e5-869816830496-kube-api-access-ksfcv\") 
pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204823 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204843 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-auth-proxy-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204861 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204879 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l84s\" (UniqueName: \"kubernetes.io/projected/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-kube-api-access-8l84s\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: 
I0224 09:11:23.204896 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68d26dd2-7abb-404f-a8e5-869816830496-cert\") pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204929 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5430e137-6e26-43f5-bd31-7a2c83e9997c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204954 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj7x7\" (UniqueName: \"kubernetes.io/projected/5430e137-6e26-43f5-bd31-7a2c83e9997c-kube-api-access-wj7x7\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204972 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.204992 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-node-pullsecrets\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205012 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205032 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-serving-cert\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205047 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205067 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tz7\" (UniqueName: \"kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.205089 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-config\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205118 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205139 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205158 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-image-import-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205175 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205192 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205211 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205229 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205248 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 
crc kubenswrapper[4822]: I0224 09:11:23.205264 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit-dir\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205282 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205302 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205329 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205349 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca\") pod \"controller-manager-879f6c89f-rw92h\" 
(UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205339 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205736 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.205365 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-encryption-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.216578 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.216902 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.206016 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e83cbf09-f579-43b8-b1f5-bf43c477d342-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217136 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtms\" (UniqueName: \"kubernetes.io/projected/5f1841b8-2101-467d-9d66-f4a372088e0c-kube-api-access-ldtms\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217152 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217169 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217186 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-serving-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217201 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217216 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-client\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217232 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-images\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: 
I0224 09:11:23.217245 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217262 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217289 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217304 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-config\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217317 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217336 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217350 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xlf\" (UniqueName: \"kubernetes.io/projected/e83cbf09-f579-43b8-b1f5-bf43c477d342-kube-api-access-69xlf\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217364 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f1841b8-2101-467d-9d66-f4a372088e0c-serving-cert\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.217377 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 
09:11:23.217393 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.219999 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220119 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220146 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220271 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220383 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220439 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220468 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bczvj"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220555 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220665 4822 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220698 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220709 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220786 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220820 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220959 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221045 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221056 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221349 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221482 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221580 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.220667 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.221719 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.222099 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.225306 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.225618 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.225824 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.229319 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.236290 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.236843 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.236980 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.237813 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.237979 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.237990 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.238111 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.238333 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.238548 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 
09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.238762 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.238932 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.239132 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.239998 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.240293 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4qd6h"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.240680 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.241053 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.241129 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.242474 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.243183 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.243741 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.244557 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.246408 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.246852 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.247772 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.252297 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.257479 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.261506 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.261864 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.262386 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.263893 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.264675 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.268180 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.268849 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.277712 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.284552 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.288364 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5nmd4"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.289167 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.289989 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.290534 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.291053 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.291240 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.291291 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5lckc"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.291865 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.293417 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.293927 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.294033 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8sr4g"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.304191 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.306338 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.306862 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.307176 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-csmqv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.307260 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.307527 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.308305 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6srg"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.309377 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318659 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318806 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318832 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318855 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-config\") pod 
\"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318871 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318888 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318903 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj2s\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-kube-api-access-ckj2s\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318937 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-encryption-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318954 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318969 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.318985 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f77631-43b1-41e9-81ad-93134998b71e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.319003 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e83cbf09-f579-43b8-b1f5-bf43c477d342-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.319000 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.319668 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.319019 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/159468c1-5e81-4bd2-8696-32a9c896b2ba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.320987 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.322956 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.323267 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.323904 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.324234 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-54fq8"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.324812 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325129 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtms\" (UniqueName: \"kubernetes.io/projected/5f1841b8-2101-467d-9d66-f4a372088e0c-kube-api-access-ldtms\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325186 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325209 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325228 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9x2\" (UniqueName: \"kubernetes.io/projected/24f77631-43b1-41e9-81ad-93134998b71e-kube-api-access-bb9x2\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: E0224 09:11:23.325285 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:13:25.325270736 +0000 UTC m=+327.713033284 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325306 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-serving-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.325324 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.326213 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-serving-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.326609 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327061 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327176 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-client\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327297 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h94mg\" (UniqueName: \"kubernetes.io/projected/68be5d27-605e-4f51-acaf-5e97915dd673-kube-api-access-h94mg\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327624 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-images\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327681 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327713 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7wb\" (UniqueName: \"kubernetes.io/projected/2b6b8486-799e-43df-beec-2c086d6411d1-kube-api-access-qk7wb\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327739 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.327848 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vhssf"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.328230 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.328290 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68be5d27-605e-4f51-acaf-5e97915dd673-proxy-tls\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.328406 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.328695 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 
09:11:23.328744 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329193 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329606 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-config\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329641 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329658 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329692 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329700 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e83cbf09-f579-43b8-b1f5-bf43c477d342-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329729 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xlf\" (UniqueName: \"kubernetes.io/projected/e83cbf09-f579-43b8-b1f5-bf43c477d342-kube-api-access-69xlf\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329748 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f1841b8-2101-467d-9d66-f4a372088e0c-serving-cert\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329767 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329783 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f691718-37cb-46da-aece-6173ba2ad129-trusted-ca\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329797 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329819 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-images\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329844 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.329865 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5430e137-6e26-43f5-bd31-7a2c83e9997c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329898 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f77631-43b1-41e9-81ad-93134998b71e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329935 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbrm\" (UniqueName: \"kubernetes.io/projected/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-kube-api-access-mjbrm\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329957 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf249\" (UniqueName: \"kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.329979 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m42l\" (UniqueName: 
\"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-kube-api-access-5m42l\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330005 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ece2924-64f7-4738-9049-4ac6fa8dcd75-machine-approver-tls\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330024 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330046 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52tsm\" (UniqueName: \"kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330532 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" 
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330573 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330615 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-serving-cert\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330634 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330650 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99ql\" (UniqueName: \"kubernetes.io/projected/0ece2924-64f7-4738-9049-4ac6fa8dcd75-kube-api-access-h99ql\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330667 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330683 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330736 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmnb\" (UniqueName: \"kubernetes.io/projected/8cd11009-44d6-4539-b702-958f388fc85e-kube-api-access-lsmnb\") pod \"downloads-7954f5f757-z895m\" (UID: \"8cd11009-44d6-4539-b702-958f388fc85e\") " pod="openshift-console/downloads-7954f5f757-z895m"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330752 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-client\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330769 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfcv\" (UniqueName: \"kubernetes.io/projected/68d26dd2-7abb-404f-a8e5-869816830496-kube-api-access-ksfcv\") pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330786 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330803 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-serving-cert\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330820 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-auth-proxy-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330840 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330854 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-images\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330871 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.330984 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj7x7\" (UniqueName: \"kubernetes.io/projected/5430e137-6e26-43f5-bd31-7a2c83e9997c-kube-api-access-wj7x7\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331018 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l84s\" (UniqueName: \"kubernetes.io/projected/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-kube-api-access-8l84s\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331033 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68d26dd2-7abb-404f-a8e5-869816830496-cert\") pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331047 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5430e137-6e26-43f5-bd31-7a2c83e9997c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331062 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331078 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f691718-37cb-46da-aece-6173ba2ad129-metrics-tls\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331097 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-node-pullsecrets\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331111 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331126 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331142 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-serving-cert\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331158 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331185 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tz7\" (UniqueName: \"kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h"
Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331199 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-config\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331216 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-service-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331233 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331373 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331391 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331406 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/159468c1-5e81-4bd2-8696-32a9c896b2ba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331425 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-image-import-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331439 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331472 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331486 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331500 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331515 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331531 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit-dir\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.331596 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-audit-dir\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.332163 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: 
I0224 09:11:23.332625 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.332668 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.333215 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ece2924-64f7-4738-9049-4ac6fa8dcd75-auth-proxy-config\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.335924 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.336036 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.337205 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5430e137-6e26-43f5-bd31-7a2c83e9997c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.337493 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.337539 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bm7pg"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.337797 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-node-pullsecrets\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.337971 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.338313 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.338529 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.338580 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5430e137-6e26-43f5-bd31-7a2c83e9997c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.339095 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.339420 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ece2924-64f7-4738-9049-4ac6fa8dcd75-machine-approver-tls\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.340224 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.340461 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.341161 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-image-import-ca\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.341524 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.343090 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.344113 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.344534 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-config\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: 
\"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.344829 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.346155 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cbf09-f579-43b8-b1f5-bf43c477d342-config\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.346192 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.346486 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-serving-cert\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.346722 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.347185 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vct48\" (UID: 
\"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.347199 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.347260 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f1841b8-2101-467d-9d66-f4a372088e0c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.347555 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.347563 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68d26dd2-7abb-404f-a8e5-869816830496-cert\") pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348363 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f1841b8-2101-467d-9d66-f4a372088e0c-serving-cert\") 
pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348338 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-encryption-config\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348646 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348768 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348793 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.348997 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.349749 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.350662 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.350731 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.354276 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.354418 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.355166 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c9nn5"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.356170 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.357283 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zqsnf"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.357743 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-serving-cert\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.357886 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.359596 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.362620 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbzp9"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.362652 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.362662 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.365036 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.365110 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.372396 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.372577 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.372722 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.373175 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.373290 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 
24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.373528 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-etcd-client\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.373589 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.375866 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dwqqh"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.377131 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.379247 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z895m"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.382159 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bczvj"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.383983 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.384157 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.388185 4822 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.388244 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-54fq8"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.390640 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.391738 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.394683 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5nmd4"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.397422 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.401028 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.403232 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bm7pg"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.408574 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.409116 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.411010 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9nn5"] Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.411811 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.412831 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.414194 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.414989 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.417093 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp"] Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.424599 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432001 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m42l\" (UniqueName: \"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-kube-api-access-5m42l\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432031 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f77631-43b1-41e9-81ad-93134998b71e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432069 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432087 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432106 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmnb\" (UniqueName: \"kubernetes.io/projected/8cd11009-44d6-4539-b702-958f388fc85e-kube-api-access-lsmnb\") pod \"downloads-7954f5f757-z895m\" (UID: \"8cd11009-44d6-4539-b702-958f388fc85e\") " pod="openshift-console/downloads-7954f5f757-z895m" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432123 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-client\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432139 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-serving-cert\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432162 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432177 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-images\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432204 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f691718-37cb-46da-aece-6173ba2ad129-metrics-tls\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432221 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432243 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-service-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432259 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/159468c1-5e81-4bd2-8696-32a9c896b2ba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432279 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-config\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432294 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432311 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj2s\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-kube-api-access-ckj2s\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc 
kubenswrapper[4822]: I0224 09:11:23.432329 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432360 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f77631-43b1-41e9-81ad-93134998b71e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432378 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/159468c1-5e81-4bd2-8696-32a9c896b2ba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432398 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb9x2\" (UniqueName: \"kubernetes.io/projected/24f77631-43b1-41e9-81ad-93134998b71e-kube-api-access-bb9x2\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432420 4822 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h94mg\" (UniqueName: \"kubernetes.io/projected/68be5d27-605e-4f51-acaf-5e97915dd673-kube-api-access-h94mg\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432438 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7wb\" (UniqueName: \"kubernetes.io/projected/2b6b8486-799e-43df-beec-2c086d6411d1-kube-api-access-qk7wb\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432454 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68be5d27-605e-4f51-acaf-5e97915dd673-proxy-tls\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432476 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f691718-37cb-46da-aece-6173ba2ad129-trusted-ca\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.432492 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.433272 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.433439 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.433553 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-service-ca\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.433560 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6b8486-799e-43df-beec-2c086d6411d1-config\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.433958 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/159468c1-5e81-4bd2-8696-32a9c896b2ba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: 
\"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.434974 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-etcd-client\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.435013 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b6b8486-799e-43df-beec-2c086d6411d1-serving-cert\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.437022 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/159468c1-5e81-4bd2-8696-32a9c896b2ba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.444276 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.464330 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.485510 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.505837 4822 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.509668 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f691718-37cb-46da-aece-6173ba2ad129-metrics-tls\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.526058 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.533960 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.537536 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f51aff12-328f-4b79-8dbb-2079510f45dc-metrics-certs\") pod \"network-metrics-daemon-htbq4\" (UID: \"f51aff12-328f-4b79-8dbb-2079510f45dc\") " pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.554581 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.565263 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f691718-37cb-46da-aece-6173ba2ad129-trusted-ca\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.565407 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.565438 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.580880 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htbq4" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.584938 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.596459 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.605335 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.611589 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.619562 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.627654 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.645720 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.654174 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.667028 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.687499 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.705617 4822 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.725776 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.745580 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.769122 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.780281 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f77631-43b1-41e9-81ad-93134998b71e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.785233 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.808328 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.814325 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f77631-43b1-41e9-81ad-93134998b71e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.825410 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.864767 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.886467 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.893885 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-htbq4"] Feb 24 09:11:23 crc kubenswrapper[4822]: W0224 09:11:23.902505 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51aff12_328f_4b79_8dbb_2079510f45dc.slice/crio-d1c1467768f38c175f7f24436181d75eb22c78b2b457735c2b164aa5b2a5a08c WatchSource:0}: Error finding container d1c1467768f38c175f7f24436181d75eb22c78b2b457735c2b164aa5b2a5a08c: Status 404 returned error can't find the container with id d1c1467768f38c175f7f24436181d75eb22c78b2b457735c2b164aa5b2a5a08c Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.906246 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.924788 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.945408 4822 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.964585 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 09:11:23 crc kubenswrapper[4822]: I0224 09:11:23.995300 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.004616 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.024440 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.047160 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.065017 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.085661 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.105507 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 09:11:24 crc kubenswrapper[4822]: W0224 09:11:24.107675 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9b51a157f607dd237b9cd499176e6918b3b4a29a34790848455e486929a4c52a WatchSource:0}: Error finding container 
9b51a157f607dd237b9cd499176e6918b3b4a29a34790848455e486929a4c52a: Status 404 returned error can't find the container with id 9b51a157f607dd237b9cd499176e6918b3b4a29a34790848455e486929a4c52a Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.113829 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68be5d27-605e-4f51-acaf-5e97915dd673-images\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.125408 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.145547 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.158654 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68be5d27-605e-4f51-acaf-5e97915dd673-proxy-tls\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.185246 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.207162 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.225159 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 09:11:24 crc 
kubenswrapper[4822]: I0224 09:11:24.245778 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.266070 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.286250 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.303289 4822 request.go:700] Waited for 1.011640668s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0 Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.305689 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.325353 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.344220 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.366717 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.385437 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.413467 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 09:11:24 crc 
kubenswrapper[4822]: I0224 09:11:24.424805 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.446160 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.465468 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.480128 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"22386e773c313f565068bd360b1f21f9b29462326b2c8c0ba1784f97ecefd119"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.480179 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9b51a157f607dd237b9cd499176e6918b3b4a29a34790848455e486929a4c52a"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.482163 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htbq4" event={"ID":"f51aff12-328f-4b79-8dbb-2079510f45dc","Type":"ContainerStarted","Data":"1c24122d0fd5a29521e558db85eb8f11607e09fb19148a11239768059e2ee910"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.482188 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htbq4" event={"ID":"f51aff12-328f-4b79-8dbb-2079510f45dc","Type":"ContainerStarted","Data":"d1c1467768f38c175f7f24436181d75eb22c78b2b457735c2b164aa5b2a5a08c"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.483564 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ff2a473c19a10d118e04fed984f77ea1759aac4cb0be83746c7e15c2497ae58d"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.483586 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c12a1afe755a763898de256df13a45877c79c49ab065785d085c01da11dc4612"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.484855 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.485366 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"47ec55e02f29bb205d7264084f1434bff4b676bc28a9345ec97714c2c1ea634c"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.485389 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bd0bfc308711375c448aaa5747829c3478578fec52111783688c368f411be04b"} Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.485648 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.505859 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.525119 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 
09:11:24.545420 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.565458 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.585216 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.605586 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.625366 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.645336 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.664960 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.685075 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.705277 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.725411 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.745780 4822 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.764971 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.784781 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.805427 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.826735 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.845337 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.865985 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.904992 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.909969 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtms\" (UniqueName: \"kubernetes.io/projected/5f1841b8-2101-467d-9d66-f4a372088e0c-kube-api-access-ldtms\") pod \"authentication-operator-69f744f599-8sr4g\" (UID: \"5f1841b8-2101-467d-9d66-f4a372088e0c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.925087 4822 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.945573 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 24 09:11:24 crc kubenswrapper[4822]: I0224 09:11:24.966188 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.015583 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tsm\" (UniqueName: \"kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm\") pod \"oauth-openshift-558db77b4-vct48\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.029952 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj7x7\" (UniqueName: \"kubernetes.io/projected/5430e137-6e26-43f5-bd31-7a2c83e9997c-kube-api-access-wj7x7\") pod \"openshift-apiserver-operator-796bbdcf4f-d4kcw\" (UID: \"5430e137-6e26-43f5-bd31-7a2c83e9997c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.044653 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l84s\" (UniqueName: \"kubernetes.io/projected/8d4f7e76-5492-4aff-ac8d-ead257d6d9d6-kube-api-access-8l84s\") pod \"openshift-config-operator-7777fb866f-csmqv\" (UID: \"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.047062 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 09:11:25 crc kubenswrapper[4822]: 
I0224 09:11:25.055024 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.065849 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.086946 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.093895 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.106721 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.125521 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.130674 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.146601 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.194832 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.200811 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfcv\" (UniqueName: \"kubernetes.io/projected/68d26dd2-7abb-404f-a8e5-869816830496-kube-api-access-ksfcv\") pod \"ingress-canary-4qd6h\" (UID: \"68d26dd2-7abb-404f-a8e5-869816830496\") " pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.221119 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xlf\" (UniqueName: \"kubernetes.io/projected/e83cbf09-f579-43b8-b1f5-bf43c477d342-kube-api-access-69xlf\") pod \"machine-api-operator-5694c8668f-c6srg\" (UID: \"e83cbf09-f579-43b8-b1f5-bf43c477d342\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.229438 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf249\" (UniqueName: \"kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249\") pod \"route-controller-manager-6576b87f9c-2m726\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.234580 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4qd6h" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.248842 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99ql\" (UniqueName: \"kubernetes.io/projected/0ece2924-64f7-4738-9049-4ac6fa8dcd75-kube-api-access-h99ql\") pod \"machine-approver-56656f9798-r6nlr\" (UID: \"0ece2924-64f7-4738-9049-4ac6fa8dcd75\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.265538 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbrm\" (UniqueName: \"kubernetes.io/projected/e1dc4f3f-07b4-40e4-b324-3bce1b98b132-kube-api-access-mjbrm\") pod \"apiserver-76f77b778f-vhssf\" (UID: \"e1dc4f3f-07b4-40e4-b324-3bce1b98b132\") " pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.267213 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.289465 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.296861 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tz7\" (UniqueName: \"kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7\") pod \"controller-manager-879f6c89f-rw92h\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.302430 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8sr4g"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.303586 4822 request.go:700] Waited for 1.947103646s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.305772 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.306066 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.327085 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.341267 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.344800 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.356498 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.366798 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.381522 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.390854 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.406695 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:11:25 crc kubenswrapper[4822]: W0224 09:11:25.408504 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ece2924_64f7_4738_9049_4ac6fa8dcd75.slice/crio-5e775a15f022cd3d8f0826f79b3c9d3f23747092a5f63c7885d80d65c34149cb WatchSource:0}: Error finding container 5e775a15f022cd3d8f0826f79b3c9d3f23747092a5f63c7885d80d65c34149cb: Status 404 returned error can't find the container with id 5e775a15f022cd3d8f0826f79b3c9d3f23747092a5f63c7885d80d65c34149cb Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.433675 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m42l\" (UniqueName: 
\"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-kube-api-access-5m42l\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.454479 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-csmqv"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.458996 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/159468c1-5e81-4bd2-8696-32a9c896b2ba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dqq5s\" (UID: \"159468c1-5e81-4bd2-8696-32a9c896b2ba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.469290 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.483888 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmnb\" (UniqueName: \"kubernetes.io/projected/8cd11009-44d6-4539-b702-958f388fc85e-kube-api-access-lsmnb\") pod \"downloads-7954f5f757-z895m\" (UID: \"8cd11009-44d6-4539-b702-958f388fc85e\") " pod="openshift-console/downloads-7954f5f757-z895m" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.509326 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj2s\" (UniqueName: \"kubernetes.io/projected/8f691718-37cb-46da-aece-6173ba2ad129-kube-api-access-ckj2s\") pod 
\"ingress-operator-5b745b69d9-rtmmj\" (UID: \"8f691718-37cb-46da-aece-6173ba2ad129\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.514524 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" event={"ID":"0ece2924-64f7-4738-9049-4ac6fa8dcd75","Type":"ContainerStarted","Data":"5e775a15f022cd3d8f0826f79b3c9d3f23747092a5f63c7885d80d65c34149cb"} Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.519825 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb9x2\" (UniqueName: \"kubernetes.io/projected/24f77631-43b1-41e9-81ad-93134998b71e-kube-api-access-bb9x2\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lh4k\" (UID: \"24f77631-43b1-41e9-81ad-93134998b71e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.520321 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" event={"ID":"751981f5-4bd9-42fd-888e-2407c6a197ca","Type":"ContainerStarted","Data":"ba51c796c73dbc7c6fca7b9fb40c8ec89f9af8338f562b5791b17e39287a027a"} Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.524013 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" event={"ID":"5430e137-6e26-43f5-bd31-7a2c83e9997c","Type":"ContainerStarted","Data":"4d568306867411d2ce734124292d39898a8af6b3495d0db031fa2aeadd9d4363"} Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.527156 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htbq4" event={"ID":"f51aff12-328f-4b79-8dbb-2079510f45dc","Type":"ContainerStarted","Data":"7a46b5044cb0d8c2fb4f8e34533eb32a99098de38365fb5e2b84cd3607ab9cc7"} Feb 24 
09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.531283 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" event={"ID":"5f1841b8-2101-467d-9d66-f4a372088e0c","Type":"ContainerStarted","Data":"b3542fcbe020e300699cedcb000c0877f881b9741da18b70124f446718fa4106"} Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.532320 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" event={"ID":"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6","Type":"ContainerStarted","Data":"41dde5c9fd678d48df7835b77dba2c7bb98c9303301d37e0af3f09b871a0c46f"} Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.535263 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z895m" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.539764 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tqmq8\" (UID: \"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.542484 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.555654 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.565846 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7wb\" (UniqueName: \"kubernetes.io/projected/2b6b8486-799e-43df-beec-2c086d6411d1-kube-api-access-qk7wb\") pod \"etcd-operator-b45778765-dwqqh\" (UID: \"2b6b8486-799e-43df-beec-2c086d6411d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.570783 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.571599 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.579617 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.589428 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h94mg\" (UniqueName: \"kubernetes.io/projected/68be5d27-605e-4f51-acaf-5e97915dd673-kube-api-access-h94mg\") pod \"machine-config-operator-74547568cd-xchvv\" (UID: \"68be5d27-605e-4f51-acaf-5e97915dd673\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.593163 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.616820 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vhssf"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.623525 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.637475 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4qd6h"] Feb 24 09:11:25 crc kubenswrapper[4822]: W0224 09:11:25.651613 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68d26dd2_7abb_404f_a8e5_869816830496.slice/crio-ab7a9bba944fe9ef9dd1d991dd487a8d059914c58b4abb93b76eaa8944e565a5 WatchSource:0}: Error finding container ab7a9bba944fe9ef9dd1d991dd487a8d059914c58b4abb93b76eaa8944e565a5: Status 404 returned error can't find the container with id ab7a9bba944fe9ef9dd1d991dd487a8d059914c58b4abb93b76eaa8944e565a5 Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667137 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667382 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667409 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-metrics-tls\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667451 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69k8\" (UniqueName: \"kubernetes.io/projected/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-kube-api-access-m69k8\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667473 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp24t\" (UniqueName: \"kubernetes.io/projected/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-kube-api-access-cp24t\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: E0224 09:11:25.667508 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.167482569 +0000 UTC m=+208.555245117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.667617 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669130 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2sq8\" (UniqueName: \"kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669155 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c24d6b1-4978-4e08-acc6-e0193fead51a-serving-cert\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669279 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config\") pod 
\"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669315 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669329 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669387 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669405 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669484 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrj2m\" (UniqueName: \"kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669525 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669542 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-config\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669609 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp2l\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669647 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: 
\"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669707 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l2jv\" (UniqueName: \"kubernetes.io/projected/5c24d6b1-4978-4e08-acc6-e0193fead51a-kube-api-access-9l2jv\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669728 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc6b075-6968-4f08-9c46-f713a9a05672-config\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669771 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669798 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669856 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l6jbq\" (UniqueName: \"kubernetes.io/projected/6985f527-bfbe-45cc-84d4-cf56c4ec06fd-kube-api-access-l6jbq\") pod \"migrator-59844c95c7-wtvhk\" (UID: \"6985f527-bfbe-45cc-84d4-cf56c4ec06fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669882 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.669898 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fa3a04c-00d8-43f5-9486-a03ea57167df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670275 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670324 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbc6b075-6968-4f08-9c46-f713a9a05672-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670342 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7261732-4026-4400-8d11-1bc189c8be83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670358 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc6b075-6968-4f08-9c46-f713a9a05672-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670393 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670408 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-trusted-ca\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670456 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa3a04c-00d8-43f5-9486-a03ea57167df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670470 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa3a04c-00d8-43f5-9486-a03ea57167df-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670513 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7261732-4026-4400-8d11-1bc189c8be83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670527 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.670544 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8cqk\" (UniqueName: 
\"kubernetes.io/projected/e7261732-4026-4400-8d11-1bc189c8be83-kube-api-access-c8cqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.771726 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4822]: E0224 09:11:25.772115 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.272084007 +0000 UTC m=+208.659846555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772427 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l2jv\" (UniqueName: \"kubernetes.io/projected/5c24d6b1-4978-4e08-acc6-e0193fead51a-kube-api-access-9l2jv\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772461 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlhr\" (UniqueName: \"kubernetes.io/projected/cf74c55b-847f-48f1-8b07-7885817ac0ce-kube-api-access-9dlhr\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772482 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt4j6\" (UniqueName: \"kubernetes.io/projected/a336a49d-01a4-421c-8168-54bf7283e5a6-kube-api-access-gt4j6\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772507 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc6b075-6968-4f08-9c46-f713a9a05672-config\") pod 
\"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772530 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-encryption-config\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772551 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772572 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljx2p\" (UniqueName: \"kubernetes.io/projected/b2e26472-d7fa-4416-8b72-c41558ca9986-kube-api-access-ljx2p\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772618 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-cabundle\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772637 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/96265d5d-2168-484b-9a0d-b0813a2defa1-config-volume\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772679 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772701 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d142bb1-7f07-498b-a6d6-d378bb619c22-service-ca-bundle\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772720 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzkd\" (UniqueName: \"kubernetes.io/projected/6ad08e0e-e2f4-43af-90e2-f449237c358b-kube-api-access-pkzkd\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772765 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dce3994-c9ed-4dee-8614-c58312656132-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772787 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772819 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-csi-data-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772856 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772880 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6jbq\" (UniqueName: \"kubernetes.io/projected/6985f527-bfbe-45cc-84d4-cf56c4ec06fd-kube-api-access-l6jbq\") pod \"migrator-59844c95c7-wtvhk\" (UID: \"6985f527-bfbe-45cc-84d4-cf56c4ec06fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772948 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.772965 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fa3a04c-00d8-43f5-9486-a03ea57167df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.773007 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.773022 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttrk\" (UniqueName: \"kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.773037 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-serving-cert\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc 
kubenswrapper[4822]: I0224 09:11:25.775442 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc6b075-6968-4f08-9c46-f713a9a05672-config\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.776473 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.776815 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780078 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbc6b075-6968-4f08-9c46-f713a9a05672-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780119 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-mountpoint-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " 
pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780162 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7261732-4026-4400-8d11-1bc189c8be83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780189 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc6b075-6968-4f08-9c46-f713a9a05672-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780217 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780239 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-trusted-ca\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780261 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-registration-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780283 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780303 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96265d5d-2168-484b-9a0d-b0813a2defa1-metrics-tls\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780328 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5bad30a5-0f39-4e1f-a688-5b87f227062c-tmpfs\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780380 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-key\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780404 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-srv-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780446 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780495 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa3a04c-00d8-43f5-9486-a03ea57167df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780517 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa3a04c-00d8-43f5-9486-a03ea57167df-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780548 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-srv-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: 
\"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.780593 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jlj8\" (UniqueName: \"kubernetes.io/projected/0d142bb1-7f07-498b-a6d6-d378bb619c22-kube-api-access-8jlj8\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.782079 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-trusted-ca\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.782233 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.783626 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7261732-4026-4400-8d11-1bc189c8be83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.786357 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4fa3a04c-00d8-43f5-9486-a03ea57167df-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787505 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787563 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7261732-4026-4400-8d11-1bc189c8be83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787590 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dce3994-c9ed-4dee-8614-c58312656132-proxy-tls\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787612 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-policies\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787638 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787656 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74lw\" (UniqueName: \"kubernetes.io/projected/cd8b0a0d-db6c-49d2-8b52-761482be3f06-kube-api-access-q74lw\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787676 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8cqk\" (UniqueName: \"kubernetes.io/projected/e7261732-4026-4400-8d11-1bc189c8be83-kube-api-access-c8cqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787693 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6503db63-f800-40ec-bf51-12601462d7c7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mv4t2\" (UID: \"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.787718 4822 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.788562 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-certs\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.788607 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.788892 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.788986 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-metrics-tls\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.789012 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m69k8\" (UniqueName: \"kubernetes.io/projected/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-kube-api-access-m69k8\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.789028 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp24t\" (UniqueName: \"kubernetes.io/projected/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-kube-api-access-cp24t\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.789062 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm92h\" (UniqueName: \"kubernetes.io/projected/96265d5d-2168-484b-9a0d-b0813a2defa1-kube-api-access-xm92h\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.789144 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: E0224 09:11:25.789335 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:26.289319986 +0000 UTC m=+208.677082534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.790100 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792367 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-stats-auth\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792484 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsvw\" (UniqueName: \"kubernetes.io/projected/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-kube-api-access-lnsvw\") pod \"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792526 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792561 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8b0a0d-db6c-49d2-8b52-761482be3f06-serving-cert\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792619 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5js7\" (UniqueName: \"kubernetes.io/projected/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-kube-api-access-m5js7\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792685 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2sq8\" (UniqueName: \"kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792711 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c24d6b1-4978-4e08-acc6-e0193fead51a-serving-cert\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.792779 
4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-metrics-certs\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793528 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793754 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghns\" (UniqueName: \"kubernetes.io/projected/5bad30a5-0f39-4e1f-a688-5b87f227062c-kube-api-access-rghns\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793800 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793823 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-socket-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " 
pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793854 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793871 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793887 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-plugins-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.793951 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnlr\" (UniqueName: \"kubernetes.io/projected/a47f5223-0799-4d25-be03-13fd76101150-kube-api-access-qgnlr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794192 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794285 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794357 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp2gq\" (UniqueName: \"kubernetes.io/projected/3dce3994-c9ed-4dee-8614-c58312656132-kube-api-access-cp2gq\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794425 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhphh\" (UniqueName: \"kubernetes.io/projected/6503db63-f800-40ec-bf51-12601462d7c7-kube-api-access-fhphh\") pod \"package-server-manager-789f6589d5-mv4t2\" (UID: \"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794475 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-default-certificate\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " 
pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794506 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b0a0d-db6c-49d2-8b52-761482be3f06-config\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794570 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrj2m\" (UniqueName: \"kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794616 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794665 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-config\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794700 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794737 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-dir\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794800 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp2l\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794838 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-node-bootstrap-token\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794875 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-client\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795004 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nx8tq\" (UniqueName: \"kubernetes.io/projected/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-kube-api-access-nx8tq\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795033 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795040 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794285 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7261732-4026-4400-8d11-1bc189c8be83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.794361 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbc6b075-6968-4f08-9c46-f713a9a05672-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" 
Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795773 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-metrics-tls\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795861 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795889 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-webhook-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.795927 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a47f5223-0799-4d25-be03-13fd76101150-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.796245 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c24d6b1-4978-4e08-acc6-e0193fead51a-config\") pod 
\"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.798710 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.801124 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa3a04c-00d8-43f5-9486-a03ea57167df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.801533 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.803169 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.804701 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c24d6b1-4978-4e08-acc6-e0193fead51a-serving-cert\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.809841 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.817302 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.821127 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.821242 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l2jv\" (UniqueName: \"kubernetes.io/projected/5c24d6b1-4978-4e08-acc6-e0193fead51a-kube-api-access-9l2jv\") pod \"console-operator-58897d9998-fbzp9\" (UID: \"5c24d6b1-4978-4e08-acc6-e0193fead51a\") " pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.830013 4822 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.831601 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.844708 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6srg"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.846293 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fa3a04c-00d8-43f5-9486-a03ea57167df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8fk\" (UID: \"4fa3a04c-00d8-43f5-9486-a03ea57167df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.855217 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.866119 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6jbq\" (UniqueName: \"kubernetes.io/projected/6985f527-bfbe-45cc-84d4-cf56c4ec06fd-kube-api-access-l6jbq\") pod \"migrator-59844c95c7-wtvhk\" (UID: \"6985f527-bfbe-45cc-84d4-cf56c4ec06fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 
09:11:25.868496 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z895m"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.885722 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.885792 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc6b075-6968-4f08-9c46-f713a9a05672-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5zmb\" (UID: \"cbc6b075-6968-4f08-9c46-f713a9a05672\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.896731 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.896939 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttrk\" (UniqueName: \"kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.896973 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-serving-cert\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897000 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-mountpoint-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897025 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-registration-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897047 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897068 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96265d5d-2168-484b-9a0d-b0813a2defa1-metrics-tls\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897089 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5bad30a5-0f39-4e1f-a688-5b87f227062c-tmpfs\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897112 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-srv-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897133 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-key\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897155 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897180 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-srv-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897202 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jlj8\" (UniqueName: \"kubernetes.io/projected/0d142bb1-7f07-498b-a6d6-d378bb619c22-kube-api-access-8jlj8\") pod 
\"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897224 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897245 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dce3994-c9ed-4dee-8614-c58312656132-proxy-tls\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897267 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-policies\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897291 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74lw\" (UniqueName: \"kubernetes.io/projected/cd8b0a0d-db6c-49d2-8b52-761482be3f06-kube-api-access-q74lw\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897318 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6503db63-f800-40ec-bf51-12601462d7c7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mv4t2\" (UID: \"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897339 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897359 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-certs\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897402 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm92h\" (UniqueName: \"kubernetes.io/projected/96265d5d-2168-484b-9a0d-b0813a2defa1-kube-api-access-xm92h\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897423 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-stats-auth\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897445 4822 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lnsvw\" (UniqueName: \"kubernetes.io/projected/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-kube-api-access-lnsvw\") pod \"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897466 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8b0a0d-db6c-49d2-8b52-761482be3f06-serving-cert\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897487 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5js7\" (UniqueName: \"kubernetes.io/projected/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-kube-api-access-m5js7\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897506 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-metrics-certs\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897536 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rghns\" (UniqueName: \"kubernetes.io/projected/5bad30a5-0f39-4e1f-a688-5b87f227062c-kube-api-access-rghns\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc 
kubenswrapper[4822]: I0224 09:11:25.897555 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-socket-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897579 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-plugins-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897604 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnlr\" (UniqueName: \"kubernetes.io/projected/a47f5223-0799-4d25-be03-13fd76101150-kube-api-access-qgnlr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897628 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp2gq\" (UniqueName: \"kubernetes.io/projected/3dce3994-c9ed-4dee-8614-c58312656132-kube-api-access-cp2gq\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897649 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhphh\" (UniqueName: \"kubernetes.io/projected/6503db63-f800-40ec-bf51-12601462d7c7-kube-api-access-fhphh\") pod \"package-server-manager-789f6589d5-mv4t2\" (UID: 
\"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897669 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-default-certificate\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897691 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b0a0d-db6c-49d2-8b52-761482be3f06-config\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897724 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897744 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-dir\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897760 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-client\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897781 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-node-bootstrap-token\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897799 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8tq\" (UniqueName: \"kubernetes.io/projected/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-kube-api-access-nx8tq\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897814 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897831 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-webhook-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897847 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a47f5223-0799-4d25-be03-13fd76101150-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897869 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlhr\" (UniqueName: \"kubernetes.io/projected/cf74c55b-847f-48f1-8b07-7885817ac0ce-kube-api-access-9dlhr\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897892 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4j6\" (UniqueName: \"kubernetes.io/projected/a336a49d-01a4-421c-8168-54bf7283e5a6-kube-api-access-gt4j6\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897966 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-encryption-config\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897983 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljx2p\" (UniqueName: \"kubernetes.io/projected/b2e26472-d7fa-4416-8b72-c41558ca9986-kube-api-access-ljx2p\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " 
pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.897998 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96265d5d-2168-484b-9a0d-b0813a2defa1-config-volume\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898014 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-cabundle\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898031 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898066 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d142bb1-7f07-498b-a6d6-d378bb619c22-service-ca-bundle\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898083 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzkd\" (UniqueName: \"kubernetes.io/projected/6ad08e0e-e2f4-43af-90e2-f449237c358b-kube-api-access-pkzkd\") pod \"catalog-operator-68c6474976-cdns9\" (UID: 
\"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898107 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dce3994-c9ed-4dee-8614-c58312656132-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898129 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-csi-data-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.898152 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.899498 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-registration-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: E0224 09:11:25.899586 4822 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.39956898 +0000 UTC m=+208.787331528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.900771 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.900942 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj"] Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.903735 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-mountpoint-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.903776 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-key\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: 
\"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.904307 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8b0a0d-db6c-49d2-8b52-761482be3f06-config\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.905172 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-serving-cert\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.905724 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.908440 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.908997 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: 
\"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.909388 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-policies\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.910052 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96265d5d-2168-484b-9a0d-b0813a2defa1-config-volume\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.910163 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-csi-data-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.910512 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.911368 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d142bb1-7f07-498b-a6d6-d378bb619c22-service-ca-bundle\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.911651 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-stats-auth\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.911955 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cf74c55b-847f-48f1-8b07-7885817ac0ce-signing-cabundle\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.912144 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.912258 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-plugins-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: 
\"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.913242 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.913286 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-audit-dir\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.915278 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2e26472-d7fa-4416-8b72-c41558ca9986-socket-dir\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.916232 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.918249 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5bad30a5-0f39-4e1f-a688-5b87f227062c-tmpfs\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.919048 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-encryption-config\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.919091 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dce3994-c9ed-4dee-8614-c58312656132-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.919158 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96265d5d-2168-484b-9a0d-b0813a2defa1-metrics-tls\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.920856 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6503db63-f800-40ec-bf51-12601462d7c7-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-mv4t2\" (UID: \"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.923240 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.923724 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-certs\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.924640 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-etcd-client\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.924681 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad08e0e-e2f4-43af-90e2-f449237c358b-srv-cert\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.924681 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/a336a49d-01a4-421c-8168-54bf7283e5a6-node-bootstrap-token\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.929617 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8cqk\" (UniqueName: \"kubernetes.io/projected/e7261732-4026-4400-8d11-1bc189c8be83-kube-api-access-c8cqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-vbcpz\" (UID: \"e7261732-4026-4400-8d11-1bc189c8be83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.930787 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-srv-cert\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.931177 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dce3994-c9ed-4dee-8614-c58312656132-proxy-tls\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.931524 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-metrics-certs\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.932259 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bad30a5-0f39-4e1f-a688-5b87f227062c-webhook-cert\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.933169 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8b0a0d-db6c-49d2-8b52-761482be3f06-serving-cert\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.933684 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume\") pod \"collect-profiles-29532060-nsnkd\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.934437 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d142bb1-7f07-498b-a6d6-d378bb619c22-default-certificate\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.938846 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a47f5223-0799-4d25-be03-13fd76101150-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.939555 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69k8\" (UniqueName: \"kubernetes.io/projected/20c8ba77-3eff-4eb0-9b14-c7c44ef225d9-kube-api-access-m69k8\") pod \"cluster-samples-operator-665b6dd947-6pqs2\" (UID: \"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.962760 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp24t\" (UniqueName: \"kubernetes.io/projected/9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7-kube-api-access-cp24t\") pod \"dns-operator-744455d44c-bczvj\" (UID: \"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:25 crc kubenswrapper[4822]: I0224 09:11:25.986965 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2sq8\" (UniqueName: \"kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8\") pod \"console-f9d7485db-6shfw\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.003021 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrj2m\" (UniqueName: \"kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m\") pod \"marketplace-operator-79b997595-jr8gq\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.004020 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.004337 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.504321713 +0000 UTC m=+208.892084261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.021802 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp2l\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.034562 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.065478 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttrk\" (UniqueName: \"kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk\") pod \"collect-profiles-29532060-nsnkd\" (UID: 
\"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.083180 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jlj8\" (UniqueName: \"kubernetes.io/projected/0d142bb1-7f07-498b-a6d6-d378bb619c22-kube-api-access-8jlj8\") pod \"router-default-5444994796-5lckc\" (UID: \"0d142bb1-7f07-498b-a6d6-d378bb619c22\") " pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.099811 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljx2p\" (UniqueName: \"kubernetes.io/projected/b2e26472-d7fa-4416-8b72-c41558ca9986-kube-api-access-ljx2p\") pod \"csi-hostpathplugin-54fq8\" (UID: \"b2e26472-d7fa-4416-8b72-c41558ca9986\") " pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.106642 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.106873 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.107279 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.607260741 +0000 UTC m=+208.995023289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.114460 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.118274 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzkd\" (UniqueName: \"kubernetes.io/projected/6ad08e0e-e2f4-43af-90e2-f449237c358b-kube-api-access-pkzkd\") pod \"catalog-operator-68c6474976-cdns9\" (UID: \"6ad08e0e-e2f4-43af-90e2-f449237c358b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.121846 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.144451 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rghns\" (UniqueName: \"kubernetes.io/projected/5bad30a5-0f39-4e1f-a688-5b87f227062c-kube-api-access-rghns\") pod \"packageserver-d55dfcdfc-m5nf7\" (UID: \"5bad30a5-0f39-4e1f-a688-5b87f227062c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.148841 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.156894 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.167514 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhphh\" (UniqueName: \"kubernetes.io/projected/6503db63-f800-40ec-bf51-12601462d7c7-kube-api-access-fhphh\") pod \"package-server-manager-789f6589d5-mv4t2\" (UID: \"6503db63-f800-40ec-bf51-12601462d7c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.170560 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.171733 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dwqqh"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.173588 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.182311 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.185114 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnlr\" (UniqueName: \"kubernetes.io/projected/a47f5223-0799-4d25-be03-13fd76101150-kube-api-access-qgnlr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4vkrp\" (UID: \"a47f5223-0799-4d25-be03-13fd76101150\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.202743 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.203236 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.207809 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp2gq\" (UniqueName: \"kubernetes.io/projected/3dce3994-c9ed-4dee-8614-c58312656132-kube-api-access-cp2gq\") pod \"machine-config-controller-84d6567774-czbsv\" (UID: \"3dce3994-c9ed-4dee-8614-c58312656132\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.211407 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.211411 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.211683 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.711668134 +0000 UTC m=+209.099430682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.222248 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlhr\" (UniqueName: \"kubernetes.io/projected/cf74c55b-847f-48f1-8b07-7885817ac0ce-kube-api-access-9dlhr\") pod \"service-ca-9c57cc56f-bm7pg\" (UID: \"cf74c55b-847f-48f1-8b07-7885817ac0ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.231997 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.233623 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.237638 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.256136 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.264144 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.270638 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4j6\" (UniqueName: \"kubernetes.io/projected/a336a49d-01a4-421c-8168-54bf7283e5a6-kube-api-access-gt4j6\") pod \"machine-config-server-zqsnf\" (UID: \"a336a49d-01a4-421c-8168-54bf7283e5a6\") " pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.273938 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.276159 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74lw\" (UniqueName: \"kubernetes.io/projected/cd8b0a0d-db6c-49d2-8b52-761482be3f06-kube-api-access-q74lw\") pod \"service-ca-operator-777779d784-4rnpp\" (UID: \"cd8b0a0d-db6c-49d2-8b52-761482be3f06\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.289625 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnsvw\" (UniqueName: \"kubernetes.io/projected/3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d-kube-api-access-lnsvw\") pod \"multus-admission-controller-857f4d67dd-5nmd4\" (UID: \"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.291858 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.301871 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.314684 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.315219 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.815199121 +0000 UTC m=+209.202961669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.318904 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5js7\" (UniqueName: \"kubernetes.io/projected/77e89f04-a4eb-4d42-a2df-3e4b266e7f85-kube-api-access-m5js7\") pod \"apiserver-7bbb656c7d-5tb67\" (UID: \"77e89f04-a4eb-4d42-a2df-3e4b266e7f85\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.319426 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm92h\" (UniqueName: 
\"kubernetes.io/projected/96265d5d-2168-484b-9a0d-b0813a2defa1-kube-api-access-xm92h\") pod \"dns-default-c9nn5\" (UID: \"96265d5d-2168-484b-9a0d-b0813a2defa1\") " pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.320758 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.331450 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.345003 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.346958 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8tq\" (UniqueName: \"kubernetes.io/projected/47ca7f6c-9012-4a3e-997b-5655cf70ce1a-kube-api-access-nx8tq\") pod \"olm-operator-6b444d44fb-pjhzd\" (UID: \"47ca7f6c-9012-4a3e-997b-5655cf70ce1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.357418 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.373359 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.375376 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zqsnf" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.419190 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.419502 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.91949093 +0000 UTC m=+209.307253468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.430719 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.468693 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.490484 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbzp9"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.520595 4822 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.521069 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.021051598 +0000 UTC m=+209.408814136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.531803 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.542868 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" event={"ID":"751981f5-4bd9-42fd-888e-2407c6a197ca","Type":"ContainerStarted","Data":"2f7d2645e8295c8ada1025aadba9b33ca3b003c0bffc24fc2e256afad93f1fcb"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.545416 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.546764 4822 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vct48 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.547288 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.546903 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.547800 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" event={"ID":"e83cbf09-f579-43b8-b1f5-bf43c477d342","Type":"ContainerStarted","Data":"d1bb07a55c8204e9c34e31c7341c7db8904430f1dcc3722c41e518bbd3b5d12f"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.547832 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" event={"ID":"e83cbf09-f579-43b8-b1f5-bf43c477d342","Type":"ContainerStarted","Data":"3fc5664d96eeb8b021d8da63c7ac1984333273b48609cb9de14a64a7ab86de8c"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.547842 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" event={"ID":"e83cbf09-f579-43b8-b1f5-bf43c477d342","Type":"ContainerStarted","Data":"bff773f4275c08135a103e8115970e725a5524c0174a14e8fa5f71714b0f2441"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.550532 4822 generic.go:334] "Generic (PLEG): container finished" podID="e1dc4f3f-07b4-40e4-b324-3bce1b98b132" containerID="4ae014d800f12b61e8a4d6ef9a9a57a852e5b102f88c34834c955865c7ef7ccb" exitCode=0 Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.550567 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" event={"ID":"e1dc4f3f-07b4-40e4-b324-3bce1b98b132","Type":"ContainerDied","Data":"4ae014d800f12b61e8a4d6ef9a9a57a852e5b102f88c34834c955865c7ef7ccb"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.550583 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" event={"ID":"e1dc4f3f-07b4-40e4-b324-3bce1b98b132","Type":"ContainerStarted","Data":"3b7ab1ddbae111ab15b33b46b35a104e79c533fef0e21eb996296e21c62be0b7"} Feb 24 
09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.554097 4822 generic.go:334] "Generic (PLEG): container finished" podID="8d4f7e76-5492-4aff-ac8d-ead257d6d9d6" containerID="b232364bb85e4d01b3f3154d62eca86e3c91c581db56e29a6c70cd0ade297007" exitCode=0 Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.554162 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" event={"ID":"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6","Type":"ContainerDied","Data":"b232364bb85e4d01b3f3154d62eca86e3c91c581db56e29a6c70cd0ade297007"} Feb 24 09:11:26 crc kubenswrapper[4822]: W0224 09:11:26.562613 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cd1b1_5fb9_448d_bcb6_933ee6c0a31a.slice/crio-45db4a43690ba9ef6cb516fb66bb9a4d05f3c02fdca16285c4e6a356d688dedb WatchSource:0}: Error finding container 45db4a43690ba9ef6cb516fb66bb9a4d05f3c02fdca16285c4e6a356d688dedb: Status 404 returned error can't find the container with id 45db4a43690ba9ef6cb516fb66bb9a4d05f3c02fdca16285c4e6a356d688dedb Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.562696 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4qd6h" event={"ID":"68d26dd2-7abb-404f-a8e5-869816830496","Type":"ContainerStarted","Data":"c27c673e7460387692ba0f73a4983173d630265a89ff2839f60b2960defd29cd"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.562741 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4qd6h" event={"ID":"68d26dd2-7abb-404f-a8e5-869816830496","Type":"ContainerStarted","Data":"ab7a9bba944fe9ef9dd1d991dd487a8d059914c58b4abb93b76eaa8944e565a5"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.568438 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" 
event={"ID":"0ece2924-64f7-4738-9049-4ac6fa8dcd75","Type":"ContainerStarted","Data":"b75176eebdd915b7b8c3cff7068a0a3b3c96406aa80faba7fb9214777efa8e71"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.568474 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" event={"ID":"0ece2924-64f7-4738-9049-4ac6fa8dcd75","Type":"ContainerStarted","Data":"96c1d4974e950cdeb379e3f03c9c025b09f0f6d25cd4b6bd035844fa0928061e"} Feb 24 09:11:26 crc kubenswrapper[4822]: W0224 09:11:26.574379 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d142bb1_7f07_498b_a6d6_d378bb619c22.slice/crio-25bad4b3dac94d3e405ad2b13d3f0253731a9587c54e0ae3700842c33307dcdf WatchSource:0}: Error finding container 25bad4b3dac94d3e405ad2b13d3f0253731a9587c54e0ae3700842c33307dcdf: Status 404 returned error can't find the container with id 25bad4b3dac94d3e405ad2b13d3f0253731a9587c54e0ae3700842c33307dcdf Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.586257 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.600132 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z895m" event={"ID":"8cd11009-44d6-4539-b702-958f388fc85e","Type":"ContainerStarted","Data":"9bafbceb6f22e770a5bdda5ddacac986382413a739004a7e9a5bc93224cbe5be"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.600174 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z895m" event={"ID":"8cd11009-44d6-4539-b702-958f388fc85e","Type":"ContainerStarted","Data":"db7c8fd32b1a7035d8bcc7f7a27e57c1c7c680dd78134b1ae77f8bee61cd16d4"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.600794 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-z895m" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.601469 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" event={"ID":"cbc6b075-6968-4f08-9c46-f713a9a05672","Type":"ContainerStarted","Data":"a6fdc1f2639d5417651658055177fca40fb9a7ad5969ebd49c1915a578e888f8"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.611593 4822 patch_prober.go:28] interesting pod/downloads-7954f5f757-z895m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.611646 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z895m" podUID="8cd11009-44d6-4539-b702-958f388fc85e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 24 09:11:26 crc 
kubenswrapper[4822]: I0224 09:11:26.619200 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" event={"ID":"8f691718-37cb-46da-aece-6173ba2ad129","Type":"ContainerStarted","Data":"4eafbb27d81017e1c6f914b50942e8dfc59763545cdba502dab9a251bed1b495"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.619310 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" event={"ID":"8f691718-37cb-46da-aece-6173ba2ad129","Type":"ContainerStarted","Data":"662375af4e6ec6ab820fc49f886950d05cd6ea25ee03edc9a8884cc5e32e511e"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.619371 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" event={"ID":"8f691718-37cb-46da-aece-6173ba2ad129","Type":"ContainerStarted","Data":"b3000dab00ca90bfc002aa822ca7292669048e6484eca4cc073256413c027c42"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.621962 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.622353 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.122338188 +0000 UTC m=+209.510100736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.636180 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" event={"ID":"02782b47-e00a-4585-9f89-4fe9585931e5","Type":"ContainerStarted","Data":"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.636235 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" event={"ID":"02782b47-e00a-4585-9f89-4fe9585931e5","Type":"ContainerStarted","Data":"23a16f2cec2658ecfcaff5c383ca20d73b91acd241b2df7cac83b19626182531"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.636384 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.637143 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" event={"ID":"68be5d27-605e-4f51-acaf-5e97915dd673","Type":"ContainerStarted","Data":"57969e68dbab069d8d03c95b2e97efcd6983488a2ac439f509893ed2730b98fe"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.638328 4822 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rw92h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 
10.217.0.7:8443: connect: connection refused" start-of-body= Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.638369 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.641417 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" event={"ID":"6985f527-bfbe-45cc-84d4-cf56c4ec06fd","Type":"ContainerStarted","Data":"2cfffaa9a8b7a75fd7ed0d3d3b78375e8d60a454c3c921c0049ed8c20b5c921b"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.661963 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" event={"ID":"24f77631-43b1-41e9-81ad-93134998b71e","Type":"ContainerStarted","Data":"fc00b327b5d7f89f1c868c404e7bf353e61d329501c773c91b1ea70366860a13"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.663493 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" event={"ID":"d79f3f30-0efc-4f81-86a8-8a348431af9e","Type":"ContainerStarted","Data":"439186f46ce6aac4e187fd2db5f87105832225f04daf31799d00dcc844fa38ba"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.663512 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" event={"ID":"d79f3f30-0efc-4f81-86a8-8a348431af9e","Type":"ContainerStarted","Data":"43ba737836198fd6c345e10fa199eb9ba947e1954c96c0a9ee47fef7c9f683b4"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.664506 4822 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.667200 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" event={"ID":"5f1841b8-2101-467d-9d66-f4a372088e0c","Type":"ContainerStarted","Data":"ad868c3604ac6d1b2584088b148413a61890304e74362987bfea78cf7fbfc900"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.668040 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" event={"ID":"4fa3a04c-00d8-43f5-9486-a03ea57167df","Type":"ContainerStarted","Data":"261e07d91d66d821aacc88170126925218c5c7ba8fb1b96a1e9946800fdd5ab8"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.670336 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" event={"ID":"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7","Type":"ContainerStarted","Data":"e649007c269249080bbd94fc9a27c91e65430e5240e4ede5dda1fd33c45a2ec2"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.672312 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" event={"ID":"2b6b8486-799e-43df-beec-2c086d6411d1","Type":"ContainerStarted","Data":"30c490061bd483421679ed9490b9cfb1e139cc381ab8620ceb0a796962ff0f06"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.675096 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" event={"ID":"5430e137-6e26-43f5-bd31-7a2c83e9997c","Type":"ContainerStarted","Data":"55c3962f027f471b01b690d9d0a9f3e8c7e7393aaa4eca66f8eb823ff36f460c"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.680491 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" event={"ID":"159468c1-5e81-4bd2-8696-32a9c896b2ba","Type":"ContainerStarted","Data":"a9b6c8e319bee8a8bc8624f03759910e40166929daa096a344bc0bfdf2a692bf"} Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.716631 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.722594 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.722738 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.222714872 +0000 UTC m=+209.610477420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.723222 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.727745 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.22773092 +0000 UTC m=+209.615493468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.733949 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-htbq4" podStartSLOduration=163.733935083 podStartE2EDuration="2m43.733935083s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:26.732149001 +0000 UTC m=+209.119911549" watchObservedRunningTime="2026-02-24 09:11:26.733935083 +0000 UTC m=+209.121697631" Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.784968 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bczvj"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.825898 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.827487 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:27.327463214 +0000 UTC m=+209.715225762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.847181 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.871337 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.887112 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9"] Feb 24 09:11:26 crc kubenswrapper[4822]: I0224 09:11:26.929126 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:26 crc kubenswrapper[4822]: E0224 09:11:26.929445 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.429433545 +0000 UTC m=+209.817196093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.030295 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.030544 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.530529639 +0000 UTC m=+209.918292187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.131685 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.132036 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.632024095 +0000 UTC m=+210.019786643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.232533 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.233152 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.733137161 +0000 UTC m=+210.120899709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.245151 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.282769 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.335237 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.335509 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.835497013 +0000 UTC m=+210.223259561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.437304 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.438007 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.937992389 +0000 UTC m=+210.325754937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.552627 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.553107 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.053096148 +0000 UTC m=+210.440858696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.653205 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-54fq8"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.653516 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.654127 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.154086569 +0000 UTC m=+210.541849117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.727853 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" event={"ID":"2b6b8486-799e-43df-beec-2c086d6411d1","Type":"ContainerStarted","Data":"f3b63d573eef902a8948745c2672b364d1734dddd4eb596c08b3b960ac5eeead"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.729532 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58710: no serving certificate available for the kubelet" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.756342 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.757877 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.257863033 +0000 UTC m=+210.645625581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.820742 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.837825 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58712: no serving certificate available for the kubelet" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.860052 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.860352 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.360336148 +0000 UTC m=+210.748098696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.865984 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" event={"ID":"9d179d56-8a1c-41b7-aa7d-6ef20b61c2e7","Type":"ContainerStarted","Data":"dbef27b34d8d63fb478ee0b9f9f6ac6c6a670ac241c4765731b117d0005160b4"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.874992 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.877769 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" event={"ID":"6ad08e0e-e2f4-43af-90e2-f449237c358b","Type":"ContainerStarted","Data":"5a4397f89f932d3b61da5a49c3dfced0d39782e3248fd957b0d2bece9458fbd7"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.889140 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" event={"ID":"68be5d27-605e-4f51-acaf-5e97915dd673","Type":"ContainerStarted","Data":"1045f19692bba39ae4ae05ed5c31244d8c7b10db30914c45516c291b17af760c"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.903777 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.905053 4822 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4qd6h" podStartSLOduration=4.905030098 podStartE2EDuration="4.905030098s" podCreationTimestamp="2026-02-24 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:27.899473383 +0000 UTC m=+210.287235931" watchObservedRunningTime="2026-02-24 09:11:27.905030098 +0000 UTC m=+210.292792646" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.909857 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" event={"ID":"6985f527-bfbe-45cc-84d4-cf56c4ec06fd","Type":"ContainerStarted","Data":"44afd41d55864565882c8c287a6fce29f2b30323ce53b4b870ef7a0b756356c1"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.918221 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bm7pg"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.918877 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" event={"ID":"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9","Type":"ContainerStarted","Data":"23924dae7da204c381f536378533608a3eaba73a74fced7f723371a7f3432bf7"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.927284 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" event={"ID":"159468c1-5e81-4bd2-8696-32a9c896b2ba","Type":"ContainerStarted","Data":"6f00efc02594809cfffd266c86be2233d8e7cc40b72aac7f79b2ea762b213e96"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.931982 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5nmd4"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.933447 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.936981 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58716: no serving certificate available for the kubelet" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.937102 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zqsnf" event={"ID":"a336a49d-01a4-421c-8168-54bf7283e5a6","Type":"ContainerStarted","Data":"3f1ca3c0a029967feab1a9df21f4ba851a643c87e1ab46a8b919f657746b3707"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.943640 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv"] Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.947421 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dwqqh" podStartSLOduration=164.947410119 podStartE2EDuration="2m44.947410119s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:27.946156492 +0000 UTC m=+210.333919040" watchObservedRunningTime="2026-02-24 09:11:27.947410119 +0000 UTC m=+210.335172667" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.954924 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" event={"ID":"d5a6bd57-4bb7-45b4-8451-27e28ee580a5","Type":"ContainerStarted","Data":"569c36113f3b97c4d1c01e29f8bd3d038b5c2331f67b0776d5fb1cc636785a10"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.961094 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:27 crc kubenswrapper[4822]: E0224 09:11:27.961409 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.461393552 +0000 UTC m=+210.849156100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.961816 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5lckc" event={"ID":"0d142bb1-7f07-498b-a6d6-d378bb619c22","Type":"ContainerStarted","Data":"25bad4b3dac94d3e405ad2b13d3f0253731a9587c54e0ae3700842c33307dcdf"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.963087 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" event={"ID":"e7261732-4026-4400-8d11-1bc189c8be83","Type":"ContainerStarted","Data":"629546e1c50ff13dec1f24294dbdaaa9b935a3f60940c8282c2890777d9b0c90"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.964268 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6shfw" 
event={"ID":"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a","Type":"ContainerStarted","Data":"45db4a43690ba9ef6cb516fb66bb9a4d05f3c02fdca16285c4e6a356d688dedb"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.964986 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" event={"ID":"5c24d6b1-4978-4e08-acc6-e0193fead51a","Type":"ContainerStarted","Data":"496f386b50d097e603db8b563d08cc45b8ad1e6b2c4effb0ab0561b638d08d03"} Feb 24 09:11:27 crc kubenswrapper[4822]: W0224 09:11:27.965340 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ca7f6c_9012_4a3e_997b_5655cf70ce1a.slice/crio-273163dd23521a930a41f45c0f044dbaf14636a7c4ee8cf100f32d8ea9286819 WatchSource:0}: Error finding container 273163dd23521a930a41f45c0f044dbaf14636a7c4ee8cf100f32d8ea9286819: Status 404 returned error can't find the container with id 273163dd23521a930a41f45c0f044dbaf14636a7c4ee8cf100f32d8ea9286819 Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.965693 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" event={"ID":"9a50eba2-73f2-4dcb-83a6-a1375a07be13","Type":"ContainerStarted","Data":"dc71f583bac1eb90cdeb6daa92910aa5c5c6d2206020105013b13713e7985281"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.966322 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" event={"ID":"a47f5223-0799-4d25-be03-13fd76101150","Type":"ContainerStarted","Data":"8511a27f3a6913dd7e4a08d6cb360f0ba07d3c95f321b08edbe4635fdef3a3cb"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.966972 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" 
event={"ID":"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7","Type":"ContainerStarted","Data":"514382ad7ccbb80cde618fb2e7ee550f50f0d9ad80fc74b19af15bff445e5f9b"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.969950 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" event={"ID":"24f77631-43b1-41e9-81ad-93134998b71e","Type":"ContainerStarted","Data":"80562bd2f395b9bde8022081bfdc463b267af31c072049d4c3282975d3f5eb7e"} Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.970103 4822 patch_prober.go:28] interesting pod/downloads-7954f5f757-z895m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.970151 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z895m" podUID="8cd11009-44d6-4539-b702-958f388fc85e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.975103 4822 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rw92h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.975169 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 24 
09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.979277 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:11:27 crc kubenswrapper[4822]: I0224 09:11:27.985162 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" podStartSLOduration=164.985140133 podStartE2EDuration="2m44.985140133s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:27.976464027 +0000 UTC m=+210.364226575" watchObservedRunningTime="2026-02-24 09:11:27.985140133 +0000 UTC m=+210.372902681" Feb 24 09:11:27 crc kubenswrapper[4822]: W0224 09:11:27.996176 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dce3994_c9ed_4dee_8614_c58312656132.slice/crio-5f8739b6cfbe12f42405d1f60acd8014f0fbd6aa0a2f7a596dd0acc17e4abd26 WatchSource:0}: Error finding container 5f8739b6cfbe12f42405d1f60acd8014f0fbd6aa0a2f7a596dd0acc17e4abd26: Status 404 returned error can't find the container with id 5f8739b6cfbe12f42405d1f60acd8014f0fbd6aa0a2f7a596dd0acc17e4abd26 Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.038343 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9nn5"] Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.040671 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58732: no serving certificate available for the kubelet" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.062105 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.062377 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.562362993 +0000 UTC m=+210.950125541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.069281 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-z895m" podStartSLOduration=166.069264717 podStartE2EDuration="2m46.069264717s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.068158994 +0000 UTC m=+210.455921552" watchObservedRunningTime="2026-02-24 09:11:28.069264717 +0000 UTC m=+210.457027275" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.072586 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67"] Feb 24 09:11:28 crc kubenswrapper[4822]: W0224 09:11:28.091437 4822 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96265d5d_2168_484b_9a0d_b0813a2defa1.slice/crio-212a8b8c126fb5a721e766f0ada848de0d61e9fb7fd72afcb2dd60812cd725d7 WatchSource:0}: Error finding container 212a8b8c126fb5a721e766f0ada848de0d61e9fb7fd72afcb2dd60812cd725d7: Status 404 returned error can't find the container with id 212a8b8c126fb5a721e766f0ada848de0d61e9fb7fd72afcb2dd60812cd725d7 Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.139423 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58748: no serving certificate available for the kubelet" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.164844 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.168481 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.668465365 +0000 UTC m=+211.056227913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.176548 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8sr4g" podStartSLOduration=166.176529103 podStartE2EDuration="2m46.176529103s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.169814176 +0000 UTC m=+210.557576724" watchObservedRunningTime="2026-02-24 09:11:28.176529103 +0000 UTC m=+210.564291651" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.177213 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" podStartSLOduration=165.177208323 podStartE2EDuration="2m45.177208323s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.101203129 +0000 UTC m=+210.488965677" watchObservedRunningTime="2026-02-24 09:11:28.177208323 +0000 UTC m=+210.564970871" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.240376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58760: no serving certificate available for the kubelet" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.257141 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r6nlr" podStartSLOduration=166.257122213 podStartE2EDuration="2m46.257122213s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.218167763 +0000 UTC m=+210.605930311" watchObservedRunningTime="2026-02-24 09:11:28.257122213 +0000 UTC m=+210.644884761" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.282286 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.282799 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.782783681 +0000 UTC m=+211.170546229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.322710 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6srg" podStartSLOduration=165.322690079 podStartE2EDuration="2m45.322690079s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.283380678 +0000 UTC m=+210.671143236" watchObservedRunningTime="2026-02-24 09:11:28.322690079 +0000 UTC m=+210.710452627" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.364469 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58764: no serving certificate available for the kubelet" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.386816 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d4kcw" podStartSLOduration=166.386799041 podStartE2EDuration="2m46.386799041s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.321221216 +0000 UTC m=+210.708983764" watchObservedRunningTime="2026-02-24 09:11:28.386799041 +0000 UTC m=+210.774561589" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.388176 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.388456 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.88844549 +0000 UTC m=+211.276208038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.409432 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rtmmj" podStartSLOduration=165.409417759 podStartE2EDuration="2m45.409417759s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.407327638 +0000 UTC m=+210.795090186" watchObservedRunningTime="2026-02-24 09:11:28.409417759 +0000 UTC m=+210.797180307" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.448478 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" podStartSLOduration=166.448456792 podStartE2EDuration="2m46.448456792s" 
podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.438725415 +0000 UTC m=+210.826487963" watchObservedRunningTime="2026-02-24 09:11:28.448456792 +0000 UTC m=+210.836219340" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.498557 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.498830 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.998806818 +0000 UTC m=+211.386569366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.499265 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.499557 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:28.99954511 +0000 UTC m=+211.387307658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.533362 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58774: no serving certificate available for the kubelet" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.534034 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lh4k" podStartSLOduration=165.534016408 podStartE2EDuration="2m45.534016408s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.523686823 +0000 UTC m=+210.911449371" watchObservedRunningTime="2026-02-24 09:11:28.534016408 +0000 UTC m=+210.921778956" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.558635 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dqq5s" podStartSLOduration=165.558617385 podStartE2EDuration="2m45.558617385s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.557846681 +0000 UTC m=+210.945609229" watchObservedRunningTime="2026-02-24 09:11:28.558617385 +0000 UTC m=+210.946379933" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.600681 4822 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.600992 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.100894543 +0000 UTC m=+211.488657101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.601157 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.601577 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.101562712 +0000 UTC m=+211.489325260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.621841 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tqmq8" podStartSLOduration=165.62181418 podStartE2EDuration="2m45.62181418s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:28.579395458 +0000 UTC m=+210.967158006" watchObservedRunningTime="2026-02-24 09:11:28.62181418 +0000 UTC m=+211.009576728" Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.702181 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.702467 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.202452771 +0000 UTC m=+211.590215319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.803353 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.810358 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.310336696 +0000 UTC m=+211.698099244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.904474 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:28 crc kubenswrapper[4822]: E0224 09:11:28.904823 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.404809495 +0000 UTC m=+211.792572043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:28 crc kubenswrapper[4822]: I0224 09:11:28.992720 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" event={"ID":"4fa3a04c-00d8-43f5-9486-a03ea57167df","Type":"ContainerStarted","Data":"aadabae89447a183ce86eeefc38788d1b544eb03f47f62cb34a8ec0e6e5214a4"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.006602 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.006949 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.506937341 +0000 UTC m=+211.894699889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.011599 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9nn5" event={"ID":"96265d5d-2168-484b-9a0d-b0813a2defa1","Type":"ContainerStarted","Data":"3ff33631ccc58d101d160e123fb2451503c62ce18fc5af57033c82852baacb96"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.011640 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9nn5" event={"ID":"96265d5d-2168-484b-9a0d-b0813a2defa1","Type":"ContainerStarted","Data":"212a8b8c126fb5a721e766f0ada848de0d61e9fb7fd72afcb2dd60812cd725d7"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.013251 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8fk" podStartSLOduration=166.013233187 podStartE2EDuration="2m46.013233187s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.011851936 +0000 UTC m=+211.399614484" watchObservedRunningTime="2026-02-24 09:11:29.013233187 +0000 UTC m=+211.400995735" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.014993 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" 
event={"ID":"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9","Type":"ContainerStarted","Data":"c7cc459901d452a3266fb49712febe682ed8454c81c4d1c3a26de554085475f3"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.015020 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" event={"ID":"20c8ba77-3eff-4eb0-9b14-c7c44ef225d9","Type":"ContainerStarted","Data":"0f4bcd556e2816b20b708b8520043ad45655b79644a92cb944c7aecb93e06cf6"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.059187 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" event={"ID":"5c24d6b1-4978-4e08-acc6-e0193fead51a","Type":"ContainerStarted","Data":"f3eab0c71c85c2e314709ca308840902c9a72ce4b97deea44612852d3d2c6624"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.059804 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.071264 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pqs2" podStartSLOduration=167.07124219 podStartE2EDuration="2m47.07124219s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.069387505 +0000 UTC m=+211.457150053" watchObservedRunningTime="2026-02-24 09:11:29.07124219 +0000 UTC m=+211.459004738" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.074165 4822 patch_prober.go:28] interesting pod/console-operator-58897d9998-fbzp9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" 
start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.074209 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" podUID="5c24d6b1-4978-4e08-acc6-e0193fead51a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.089524 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" event={"ID":"cf74c55b-847f-48f1-8b07-7885817ac0ce","Type":"ContainerStarted","Data":"53520749c0a979caa896bfc7dffff5bd3b194a4f6b3bcd8375ec9b5a8ce93db8"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.089558 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" event={"ID":"cf74c55b-847f-48f1-8b07-7885817ac0ce","Type":"ContainerStarted","Data":"71d8f150bf44d225b08952870062dd1d207be7ac523054fea8571d93863e2ab1"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.119044 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.119176 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.619159244 +0000 UTC m=+212.006921792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.119433 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.120464 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.620450132 +0000 UTC m=+212.008212680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.122227 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" podStartSLOduration=167.122215625 podStartE2EDuration="2m47.122215625s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.089395035 +0000 UTC m=+211.477157583" watchObservedRunningTime="2026-02-24 09:11:29.122215625 +0000 UTC m=+211.509978173" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.135342 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" event={"ID":"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7","Type":"ContainerStarted","Data":"152b8137b7e160aad139d29373de5f2f3e2c5429ae72939c7376ed1d65c3bad3"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.139486 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bm7pg" podStartSLOduration=166.139461744 podStartE2EDuration="2m46.139461744s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.130130678 +0000 UTC m=+211.517893236" watchObservedRunningTime="2026-02-24 09:11:29.139461744 +0000 UTC m=+211.527224292" Feb 24 09:11:29 crc 
kubenswrapper[4822]: I0224 09:11:29.147424 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" event={"ID":"6503db63-f800-40ec-bf51-12601462d7c7","Type":"ContainerStarted","Data":"95f666a95a933a0815e22fbc23632fe65b865dbc5440b7cc2393f1204ddf5161"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.147468 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" event={"ID":"6503db63-f800-40ec-bf51-12601462d7c7","Type":"ContainerStarted","Data":"c3207c8cfff28a06105c8ff736ca6a3b6171667d4ffeb8f506f5e5f04d2ec7fc"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.148015 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.156643 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" event={"ID":"d5a6bd57-4bb7-45b4-8451-27e28ee580a5","Type":"ContainerStarted","Data":"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.156935 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.158715 4822 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jr8gq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.158749 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" 
podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.161055 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" event={"ID":"68be5d27-605e-4f51-acaf-5e97915dd673","Type":"ContainerStarted","Data":"b1ec7656cebbe90ad1f44e6a66052e3bd28fa1a47fda3d75349156f299bb861f"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.177055 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" event={"ID":"e1dc4f3f-07b4-40e4-b324-3bce1b98b132","Type":"ContainerStarted","Data":"eff396951a883e032cf505cf4924f7fa19655a9c0c7a0eac4fdbaff82444230d"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.177100 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" event={"ID":"e1dc4f3f-07b4-40e4-b324-3bce1b98b132","Type":"ContainerStarted","Data":"523f302f7187182ee698999043192073e828cee61e4f027635ab5a203ec8bc77"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.200583 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" event={"ID":"47ca7f6c-9012-4a3e-997b-5655cf70ce1a","Type":"ContainerStarted","Data":"cdd332e1bed2fe7c3919681fc4cbc56f98c0e9fb9610f9db108ae7efacc26ae0"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.200630 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" event={"ID":"47ca7f6c-9012-4a3e-997b-5655cf70ce1a","Type":"ContainerStarted","Data":"273163dd23521a930a41f45c0f044dbaf14636a7c4ee8cf100f32d8ea9286819"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.201441 4822 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.203837 4822 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pjhzd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.203884 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" podUID="47ca7f6c-9012-4a3e-997b-5655cf70ce1a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.219478 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" podStartSLOduration=166.219454675 podStartE2EDuration="2m46.219454675s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.172237972 +0000 UTC m=+211.560000520" watchObservedRunningTime="2026-02-24 09:11:29.219454675 +0000 UTC m=+211.607217223" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.221245 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.221946 4822 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.721932918 +0000 UTC m=+212.109695466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.240076 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6shfw" event={"ID":"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a","Type":"ContainerStarted","Data":"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.253009 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xchvv" podStartSLOduration=166.252995545 podStartE2EDuration="2m46.252995545s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.222227287 +0000 UTC m=+211.609989835" watchObservedRunningTime="2026-02-24 09:11:29.252995545 +0000 UTC m=+211.640758093" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.261971 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" 
event={"ID":"a47f5223-0799-4d25-be03-13fd76101150","Type":"ContainerStarted","Data":"b3c2e6334b1845e83a3745eba59300eef7c0babaf793ed8342a9f6ce0b3876c7"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.265593 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" event={"ID":"8d4f7e76-5492-4aff-ac8d-ead257d6d9d6","Type":"ContainerStarted","Data":"c80d2c93aea6016d94576cb44966ecdcad36ca24009358f2e98600207239843f"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.265979 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.280814 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58790: no serving certificate available for the kubelet" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.309220 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" event={"ID":"6985f527-bfbe-45cc-84d4-cf56c4ec06fd","Type":"ContainerStarted","Data":"0ff5dac89e63cdc1e8ab8b2bbb2ad96f2c24461a73db94fc82bc4c9782b86aa7"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.315484 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" podStartSLOduration=166.3154689 podStartE2EDuration="2m46.3154689s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.252250533 +0000 UTC m=+211.640013081" watchObservedRunningTime="2026-02-24 09:11:29.3154689 +0000 UTC m=+211.703231448" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.326697 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.336328 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.836310475 +0000 UTC m=+212.224073023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.366519 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" event={"ID":"77e89f04-a4eb-4d42-a2df-3e4b266e7f85","Type":"ContainerStarted","Data":"9faa19200694ebe4a28d3197a170733acf6c13dc1b4673bb9926195b28454cfb"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.381238 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" event={"ID":"3dce3994-c9ed-4dee-8614-c58312656132","Type":"ContainerStarted","Data":"f27883ff8f846ed458e160b0060b05b622a219c318f2d29c491f6bb288e35aed"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.381283 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" event={"ID":"3dce3994-c9ed-4dee-8614-c58312656132","Type":"ContainerStarted","Data":"d9ebd46a0f288147be8a4be800a7d402792881150febd6318f3adfe9e5a9c7ee"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.381294 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" event={"ID":"3dce3994-c9ed-4dee-8614-c58312656132","Type":"ContainerStarted","Data":"5f8739b6cfbe12f42405d1f60acd8014f0fbd6aa0a2f7a596dd0acc17e4abd26"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.385164 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6shfw" podStartSLOduration=167.385139327 podStartE2EDuration="2m47.385139327s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.316353596 +0000 UTC m=+211.704116144" watchObservedRunningTime="2026-02-24 09:11:29.385139327 +0000 UTC m=+211.772901865" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.403192 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" event={"ID":"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d","Type":"ContainerStarted","Data":"d98a93d168745d6620aa6736a4d249d48845bd2a709b95637be111ca9eea6563"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.403242 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" event={"ID":"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d","Type":"ContainerStarted","Data":"7069013b8beb8552f9310f3ccb64cf0fc32f285c6b53f9705ad98abb7fc637cc"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.420347 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" event={"ID":"9a50eba2-73f2-4dcb-83a6-a1375a07be13","Type":"ContainerStarted","Data":"81b38047910613cbe7fe491f561bb0c1ab4d809c262dd448832ee4401c4b34c9"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.425206 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5lckc" event={"ID":"0d142bb1-7f07-498b-a6d6-d378bb619c22","Type":"ContainerStarted","Data":"b749355e393cb043eef8e5e455b2b380844d82fbf7ea76f3979fb3c769fd0932"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.427405 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.428654 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.928638421 +0000 UTC m=+212.316400969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.429491 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" event={"ID":"5bad30a5-0f39-4e1f-a688-5b87f227062c","Type":"ContainerStarted","Data":"11e01713175734fbbde7d5c2e157704665de89724032a01950670736106a42f1"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.429531 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" event={"ID":"5bad30a5-0f39-4e1f-a688-5b87f227062c","Type":"ContainerStarted","Data":"e2adfc31ac15fe8c747903beb8845a02f1f8c6c7ff9531b516fb57491141a818"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.429958 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.435966 4822 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m5nf7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.436019 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" podUID="5bad30a5-0f39-4e1f-a688-5b87f227062c" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.436573 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" event={"ID":"e7261732-4026-4400-8d11-1bc189c8be83","Type":"ContainerStarted","Data":"f49c369889eb7147f1c3f459f1572e6349b16fef88790d51a97a2e0fc103ee9a"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.437933 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zqsnf" event={"ID":"a336a49d-01a4-421c-8168-54bf7283e5a6","Type":"ContainerStarted","Data":"ee5be8fdf4be64cbcc216e7c6ee0603add55db54641ddb609dff051e41475e51"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.444045 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" podStartSLOduration=167.444032496 podStartE2EDuration="2m47.444032496s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.443799859 +0000 UTC m=+211.831562397" watchObservedRunningTime="2026-02-24 09:11:29.444032496 +0000 UTC m=+211.831795044" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.444808 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" podStartSLOduration=166.444804029 podStartE2EDuration="2m46.444804029s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.388899318 +0000 UTC m=+211.776661866" watchObservedRunningTime="2026-02-24 09:11:29.444804029 +0000 UTC m=+211.832566577" Feb 
24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.453608 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" event={"ID":"6ad08e0e-e2f4-43af-90e2-f449237c358b","Type":"ContainerStarted","Data":"b687d0b658b5437ef095e866526bea89846b8f31435314f9cbce221474b5cecc"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.453651 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.454987 4822 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cdns9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.455034 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" podUID="6ad08e0e-e2f4-43af-90e2-f449237c358b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.462184 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" event={"ID":"cd8b0a0d-db6c-49d2-8b52-761482be3f06","Type":"ContainerStarted","Data":"9b9fb7975c003d3c572ddfaae38d063f15e221afabd0711cc7ab94aa2d0f587d"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.462228 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" 
event={"ID":"cd8b0a0d-db6c-49d2-8b52-761482be3f06","Type":"ContainerStarted","Data":"0ad48a7a1051e88c9f2a2b80400d07d79a74c9fe90a10906e2baf06228d6e514"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.483551 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" event={"ID":"cbc6b075-6968-4f08-9c46-f713a9a05672","Type":"ContainerStarted","Data":"72ca46eae8e2aba65462b2e29cba2b2ea40c632a22bd10cb742981a4c73bf57c"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.494273 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" event={"ID":"b2e26472-d7fa-4416-8b72-c41558ca9986","Type":"ContainerStarted","Data":"029ce848e6a025ca20957517296cde840aff51af390da4c4a3f290bc56a10ae9"} Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.495041 4822 patch_prober.go:28] interesting pod/downloads-7954f5f757-z895m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.495084 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z895m" podUID="8cd11009-44d6-4539-b702-958f388fc85e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.518564 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" podStartSLOduration=167.518544525 podStartE2EDuration="2m47.518544525s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
09:11:29.51664258 +0000 UTC m=+211.904405148" watchObservedRunningTime="2026-02-24 09:11:29.518544525 +0000 UTC m=+211.906307073" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.529630 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.530664 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.030649023 +0000 UTC m=+212.418411571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.541571 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4vkrp" podStartSLOduration=166.541554515 podStartE2EDuration="2m46.541554515s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.538276098 +0000 UTC m=+211.926038646" watchObservedRunningTime="2026-02-24 09:11:29.541554515 +0000 UTC 
m=+211.929317063" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.587669 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" podStartSLOduration=167.587652566 podStartE2EDuration="2m47.587652566s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.584137693 +0000 UTC m=+211.971900241" watchObservedRunningTime="2026-02-24 09:11:29.587652566 +0000 UTC m=+211.975415104" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.615053 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vbcpz" podStartSLOduration=167.615036455 podStartE2EDuration="2m47.615036455s" podCreationTimestamp="2026-02-24 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.605248835 +0000 UTC m=+211.993011383" watchObservedRunningTime="2026-02-24 09:11:29.615036455 +0000 UTC m=+212.002799003" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.632131 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.633783 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:30.133754887 +0000 UTC m=+212.521517435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.683291 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5lckc" podStartSLOduration=166.683267899 podStartE2EDuration="2m46.683267899s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.675279983 +0000 UTC m=+212.063042531" watchObservedRunningTime="2026-02-24 09:11:29.683267899 +0000 UTC m=+212.071030447" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.702555 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zqsnf" podStartSLOduration=6.702535608 podStartE2EDuration="6.702535608s" podCreationTimestamp="2026-02-24 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.700432106 +0000 UTC m=+212.088194654" watchObservedRunningTime="2026-02-24 09:11:29.702535608 +0000 UTC m=+212.090298156" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.735677 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.736003 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.235991736 +0000 UTC m=+212.623754284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.736631 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wtvhk" podStartSLOduration=166.736615464 podStartE2EDuration="2m46.736615464s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.735011786 +0000 UTC m=+212.122774334" watchObservedRunningTime="2026-02-24 09:11:29.736615464 +0000 UTC m=+212.124378012" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.756251 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-czbsv" podStartSLOduration=166.756235343 podStartE2EDuration="2m46.756235343s" podCreationTimestamp="2026-02-24 09:08:43 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.753141552 +0000 UTC m=+212.140904100" watchObservedRunningTime="2026-02-24 09:11:29.756235343 +0000 UTC m=+212.143997891" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.788814 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" podStartSLOduration=166.788796865 podStartE2EDuration="2m46.788796865s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.786179408 +0000 UTC m=+212.173941966" watchObservedRunningTime="2026-02-24 09:11:29.788796865 +0000 UTC m=+212.176559413" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.822663 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" podStartSLOduration=166.822626244 podStartE2EDuration="2m46.822626244s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.813748932 +0000 UTC m=+212.201511480" watchObservedRunningTime="2026-02-24 09:11:29.822626244 +0000 UTC m=+212.210388792" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.837059 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5zmb" podStartSLOduration=166.837044139 podStartE2EDuration="2m46.837044139s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.834524505 +0000 UTC 
m=+212.222287063" watchObservedRunningTime="2026-02-24 09:11:29.837044139 +0000 UTC m=+212.224806687" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.839484 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.839879 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.339863272 +0000 UTC m=+212.727625820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 09:11:29.870868 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4rnpp" podStartSLOduration=166.870851797 podStartE2EDuration="2m46.870851797s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:29.869813637 +0000 UTC m=+212.257576185" watchObservedRunningTime="2026-02-24 09:11:29.870851797 +0000 UTC m=+212.258614345" Feb 24 09:11:29 crc kubenswrapper[4822]: I0224 
09:11:29.940632 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:29 crc kubenswrapper[4822]: E0224 09:11:29.940931 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.440904416 +0000 UTC m=+212.828666964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.041348 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.041490 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:30.541464285 +0000 UTC m=+212.929226833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.041540 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.041848 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.541841335 +0000 UTC m=+212.929603883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.142947 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.143137 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.643108455 +0000 UTC m=+213.030871003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.143193 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.143511 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.643497937 +0000 UTC m=+213.031260485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.244329 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.244510 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.744484678 +0000 UTC m=+213.132247226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.244675 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.244997 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.744984834 +0000 UTC m=+213.132747382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.257277 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5lckc" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.261264 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:30 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld Feb 24 09:11:30 crc kubenswrapper[4822]: [+]process-running ok Feb 24 09:11:30 crc kubenswrapper[4822]: healthz check failed Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.261319 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.267698 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.267743 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.268576 4822 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vhssf 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.268628 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vhssf" podUID="e1dc4f3f-07b4-40e4-b324-3bce1b98b132" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.345074 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.345172 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.84514936 +0000 UTC m=+213.232911908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.345661 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.345964 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.845950535 +0000 UTC m=+213.233713083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.446634 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.446842 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.946816902 +0000 UTC m=+213.334579450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.447007 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.447322 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:30.947309327 +0000 UTC m=+213.335071875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.500234 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" event={"ID":"6503db63-f800-40ec-bf51-12601462d7c7","Type":"ContainerStarted","Data":"1967dc4ce17f507bef88ee0135cd361389a9d1db53ed1339be1e83505e45a6c1"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.503619 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" event={"ID":"9d1a783f-2b4b-4a8a-b2d3-4e204b2f99c7","Type":"ContainerStarted","Data":"a72679a06e95ade94cd3024503b7cf2ec6bb920414e10a31a1b3fc84c35ea4fd"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.505214 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" event={"ID":"3e4a2d28-bb5d-47fa-9d53-e2f32307cc3d","Type":"ContainerStarted","Data":"833bf398663d5c25c4ae3c8c39a8be93d884b40d53030bc4b9c9fcfb14df15ed"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.506871 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9nn5" event={"ID":"96265d5d-2168-484b-9a0d-b0813a2defa1","Type":"ContainerStarted","Data":"55c2de834ce485bc6730d50adb8d5e2ef0dae7647f701c7c4fb0d58157b3ce89"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.507016 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c9nn5" Feb 24 09:11:30 crc kubenswrapper[4822]: 
I0224 09:11:30.507990 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" event={"ID":"b2e26472-d7fa-4416-8b72-c41558ca9986","Type":"ContainerStarted","Data":"3cbc380f4df2d30407b0057a375b56d018940fff329e4e7248576d83f5defe9e"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.509368 4822 generic.go:334] "Generic (PLEG): container finished" podID="77e89f04-a4eb-4d42-a2df-3e4b266e7f85" containerID="590261244c385c4f15c37781380a2a8c2726dfc12037280b48f1d7ecf65c13e3" exitCode=0 Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.509490 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" event={"ID":"77e89f04-a4eb-4d42-a2df-3e4b266e7f85","Type":"ContainerStarted","Data":"68dd740d31bfb1538f72ba58549fa63e08c8c465d27afbdc2239819e3d8cde95"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.509524 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" event={"ID":"77e89f04-a4eb-4d42-a2df-3e4b266e7f85","Type":"ContainerDied","Data":"590261244c385c4f15c37781380a2a8c2726dfc12037280b48f1d7ecf65c13e3"} Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.510587 4822 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jr8gq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.510674 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 
09:11:30.519376 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjhzd" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.519873 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdns9" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.548141 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.548238 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.048222346 +0000 UTC m=+213.435984894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.548430 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.548689 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.04868303 +0000 UTC m=+213.436445578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.586269 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bczvj" podStartSLOduration=167.586254709 podStartE2EDuration="2m47.586254709s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:30.576780429 +0000 UTC m=+212.964542977" watchObservedRunningTime="2026-02-24 09:11:30.586254709 +0000 UTC m=+212.974017257" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.650288 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.650482 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.150265578 +0000 UTC m=+213.538028126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.650766 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.663241 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.163225331 +0000 UTC m=+213.550987879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.701174 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58794: no serving certificate available for the kubelet" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.733636 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" podStartSLOduration=167.73361784 podStartE2EDuration="2m47.73361784s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:30.694784523 +0000 UTC m=+213.082547071" watchObservedRunningTime="2026-02-24 09:11:30.73361784 +0000 UTC m=+213.121380388" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.752603 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.752884 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:31.252871128 +0000 UTC m=+213.640633676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.790274 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5nmd4" podStartSLOduration=167.790259332 podStartE2EDuration="2m47.790259332s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:30.736065832 +0000 UTC m=+213.123828380" watchObservedRunningTime="2026-02-24 09:11:30.790259332 +0000 UTC m=+213.178021880" Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.855956 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.856541 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.356529299 +0000 UTC m=+213.744291847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.957528 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.957720 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.457691575 +0000 UTC m=+213.845454123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:30 crc kubenswrapper[4822]: I0224 09:11:30.957762 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:30 crc kubenswrapper[4822]: E0224 09:11:30.958194 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.45818779 +0000 UTC m=+213.845950338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.002080 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5nf7" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.058596 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.058667 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c9nn5" podStartSLOduration=8.058651996 podStartE2EDuration="8.058651996s" podCreationTimestamp="2026-02-24 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:30.864264897 +0000 UTC m=+213.252027445" watchObservedRunningTime="2026-02-24 09:11:31.058651996 +0000 UTC m=+213.446414544" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.058777 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:31.558749099 +0000 UTC m=+213.946511647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.058832 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.059146 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.55913893 +0000 UTC m=+213.946901478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.159712 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.159923 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.659883715 +0000 UTC m=+214.047646263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.159986 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.160258 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.660246605 +0000 UTC m=+214.048009153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.260819 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.261040 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.76100965 +0000 UTC m=+214.148772198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.261209 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.261562 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.761554346 +0000 UTC m=+214.149316894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.263540 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:31 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld Feb 24 09:11:31 crc kubenswrapper[4822]: [+]process-running ok Feb 24 09:11:31 crc kubenswrapper[4822]: healthz check failed Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.263607 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.365591 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.365877 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:31.865830906 +0000 UTC m=+214.253593454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.365943 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.366267 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.866252608 +0000 UTC m=+214.254015156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.467361 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.467566 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.967536548 +0000 UTC m=+214.355299096 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.467736 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.468122 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:31.968114655 +0000 UTC m=+214.355877203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.484828 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fbzp9" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.516778 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" event={"ID":"b2e26472-d7fa-4416-8b72-c41558ca9986","Type":"ContainerStarted","Data":"730af875c48491c7ef828b24a53923199aa0a2318e57b5667cd3ca938189aa06"} Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.516824 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" event={"ID":"b2e26472-d7fa-4416-8b72-c41558ca9986","Type":"ContainerStarted","Data":"8f0d773e89784b590c7b394b3aa38b77b69378865a9c13b36ccc57f51ae6f42c"} Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.518270 4822 generic.go:334] "Generic (PLEG): container finished" podID="9a50eba2-73f2-4dcb-83a6-a1375a07be13" containerID="81b38047910613cbe7fe491f561bb0c1ab4d809c262dd448832ee4401c4b34c9" exitCode=0 Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.518407 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" event={"ID":"9a50eba2-73f2-4dcb-83a6-a1375a07be13","Type":"ContainerDied","Data":"81b38047910613cbe7fe491f561bb0c1ab4d809c262dd448832ee4401c4b34c9"} Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.547199 4822 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.547249 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.568552 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.568658 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.068641593 +0000 UTC m=+214.456404141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.568839 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.569124 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.069117807 +0000 UTC m=+214.456880355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.569937 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.570169 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" containerID="cri-o://c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd" gracePeriod=30 Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.578548 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.603196 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.603381 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" podUID="d79f3f30-0efc-4f81-86a8-8a348431af9e" containerName="route-controller-manager" containerID="cri-o://439186f46ce6aac4e187fd2db5f87105832225f04daf31799d00dcc844fa38ba" gracePeriod=30 Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.672811 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.675091 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.175066615 +0000 UTC m=+214.562829163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.726617 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-csmqv" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.762079 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsxsb"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.762943 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.764581 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.776202 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxsb"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.777263 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.777504 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.27749412 +0000 UTC m=+214.665256658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.878097 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.878416 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.378389938 +0000 UTC m=+214.766152476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.878552 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvlv\" (UniqueName: \"kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.878635 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.878655 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.878674 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.878976 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.378964775 +0000 UTC m=+214.766727323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.907871 4822 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.909819 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.941842 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wplj"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.942724 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.944786 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.954955 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wplj"] Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.979961 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.980153 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.980180 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.980216 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlvlv\" (UniqueName: \"kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " 
pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: E0224 09:11:31.980537 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.480522273 +0000 UTC m=+214.868284821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.980882 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.981177 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities\") pod \"certified-operators-fsxsb\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:31 crc kubenswrapper[4822]: I0224 09:11:31.999823 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlvlv\" (UniqueName: \"kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv\") pod \"certified-operators-fsxsb\" (UID: 
\"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.077074 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.106336 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqsv\" (UniqueName: \"kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.106479 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.106555 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.106669 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 
09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.107210 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.607191223 +0000 UTC m=+214.994953771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.128704 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.129601 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.158482 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.209378 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.209657 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.709630598 +0000 UTC m=+215.097393146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.209757 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.209823 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqsv\" (UniqueName: \"kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.209882 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.210613 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " 
pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.210845 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.231026 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqsv\" (UniqueName: \"kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv\") pod \"community-operators-7wplj\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.255362 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.259952 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:32 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld Feb 24 09:11:32 crc kubenswrapper[4822]: [+]process-running ok Feb 24 09:11:32 crc kubenswrapper[4822]: healthz check failed Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.259991 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.310985 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzzm7\" (UniqueName: \"kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.311033 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.311051 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.311079 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.311369 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.811357701 +0000 UTC m=+215.199120249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.318228 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.319136 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.328784 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412385 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412519 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.412559 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.912532718 +0000 UTC m=+215.300295266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412586 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412614 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzzm7\" (UniqueName: \"kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412643 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475z7\" (UniqueName: \"kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412665 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412687 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.412711 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.413043 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.413114 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.413254 4822 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:32.913217849 +0000 UTC m=+215.300980387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.426672 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.432090 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzzm7\" (UniqueName: \"kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7\") pod \"certified-operators-z68xq\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.445692 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.477630 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxsb"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.514844 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5tz7\" (UniqueName: \"kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7\") pod \"02782b47-e00a-4585-9f89-4fe9585931e5\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.514930 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config\") pod \"02782b47-e00a-4585-9f89-4fe9585931e5\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.514958 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca\") pod \"02782b47-e00a-4585-9f89-4fe9585931e5\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.514980 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert\") pod \"02782b47-e00a-4585-9f89-4fe9585931e5\" (UID: \"02782b47-e00a-4585-9f89-4fe9585931e5\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.514994 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles\") pod \"02782b47-e00a-4585-9f89-4fe9585931e5\" (UID: 
\"02782b47-e00a-4585-9f89-4fe9585931e5\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.516082 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "02782b47-e00a-4585-9f89-4fe9585931e5" (UID: "02782b47-e00a-4585-9f89-4fe9585931e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.516552 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config" (OuterVolumeSpecName: "config") pod "02782b47-e00a-4585-9f89-4fe9585931e5" (UID: "02782b47-e00a-4585-9f89-4fe9585931e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.516878 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.517261 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.017232529 +0000 UTC m=+215.404995087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.517451 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02782b47-e00a-4585-9f89-4fe9585931e5" (UID: "02782b47-e00a-4585-9f89-4fe9585931e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522547 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522604 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475z7\" (UniqueName: \"kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522642 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities\") pod \"community-operators-56bwc\" (UID: 
\"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522667 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522760 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522777 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.522789 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02782b47-e00a-4585-9f89-4fe9585931e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.523671 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.523975 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content\") pod \"community-operators-56bwc\" (UID: 
\"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.527178 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.027158842 +0000 UTC m=+215.414921390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcmh" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.529305 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7" (OuterVolumeSpecName: "kube-api-access-f5tz7") pod "02782b47-e00a-4585-9f89-4fe9585931e5" (UID: "02782b47-e00a-4585-9f89-4fe9585931e5"). InnerVolumeSpecName "kube-api-access-f5tz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.531210 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerStarted","Data":"7e7d8bd3888ab5e4f40f5b568902a2ec44cf3eabdaa499d5a5a2559468d667df"} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.535683 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02782b47-e00a-4585-9f89-4fe9585931e5" (UID: "02782b47-e00a-4585-9f89-4fe9585931e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.537093 4822 generic.go:334] "Generic (PLEG): container finished" podID="d79f3f30-0efc-4f81-86a8-8a348431af9e" containerID="439186f46ce6aac4e187fd2db5f87105832225f04daf31799d00dcc844fa38ba" exitCode=0 Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.537175 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" event={"ID":"d79f3f30-0efc-4f81-86a8-8a348431af9e","Type":"ContainerDied","Data":"439186f46ce6aac4e187fd2db5f87105832225f04daf31799d00dcc844fa38ba"} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.538278 4822 generic.go:334] "Generic (PLEG): container finished" podID="02782b47-e00a-4585-9f89-4fe9585931e5" containerID="c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd" exitCode=0 Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.538317 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" 
event={"ID":"02782b47-e00a-4585-9f89-4fe9585931e5","Type":"ContainerDied","Data":"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd"} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.538331 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" event={"ID":"02782b47-e00a-4585-9f89-4fe9585931e5","Type":"ContainerDied","Data":"23a16f2cec2658ecfcaff5c383ca20d73b91acd241b2df7cac83b19626182531"} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.538346 4822 scope.go:117] "RemoveContainer" containerID="c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.538455 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rw92h" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.542978 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475z7\" (UniqueName: \"kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7\") pod \"community-operators-56bwc\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.551000 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" event={"ID":"b2e26472-d7fa-4416-8b72-c41558ca9986","Type":"ContainerStarted","Data":"2393cd991fc98203767683345a6da598fb89f161964f76d9a9d32de947f84a31"} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.555745 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.558827 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wplj"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.560690 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5tb67" Feb 24 09:11:32 crc kubenswrapper[4822]: W0224 09:11:32.571306 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5e41e0d_dd96_43df_94f6_f004923b10a3.slice/crio-07a5aa72101e9671bfdf4283fca9d725ae7c1abc4f8d28bc1bb258f9ea37ecea WatchSource:0}: Error finding container 07a5aa72101e9671bfdf4283fca9d725ae7c1abc4f8d28bc1bb258f9ea37ecea: Status 404 returned error can't find the container with id 07a5aa72101e9671bfdf4283fca9d725ae7c1abc4f8d28bc1bb258f9ea37ecea Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.577279 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-54fq8" podStartSLOduration=9.577264981 podStartE2EDuration="9.577264981s" podCreationTimestamp="2026-02-24 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:32.575813769 +0000 UTC m=+214.963576317" watchObservedRunningTime="2026-02-24 09:11:32.577264981 +0000 UTC m=+214.965027529" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.583083 4822 scope.go:117] "RemoveContainer" containerID="c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd" Feb 24 09:11:32 crc kubenswrapper[4822]: E0224 09:11:32.585306 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd\": container with ID starting with c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd not found: ID does not exist" containerID="c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.585350 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd"} err="failed to get container status \"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd\": rpc error: code = NotFound desc = could not find container \"c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd\": container with ID starting with c6d5504ab934f335db77241bc3c2e8cfd23e7a2a0849598d36cd9cc36642a7cd not found: ID does not exist" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.599727 4822 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T09:11:31.907900949Z","Handler":null,"Name":""} Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.609341 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.612539 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rw92h"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.613855 4822 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.613892 4822 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.623500 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.623788 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5tz7\" (UniqueName: \"kubernetes.io/projected/02782b47-e00a-4585-9f89-4fe9585931e5-kube-api-access-f5tz7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.623801 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02782b47-e00a-4585-9f89-4fe9585931e5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.651188 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.699582 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.727041 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf249\" (UniqueName: \"kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249\") pod \"d79f3f30-0efc-4f81-86a8-8a348431af9e\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.727087 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config\") pod \"d79f3f30-0efc-4f81-86a8-8a348431af9e\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.727117 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert\") pod \"d79f3f30-0efc-4f81-86a8-8a348431af9e\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.727172 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca\") pod \"d79f3f30-0efc-4f81-86a8-8a348431af9e\" (UID: \"d79f3f30-0efc-4f81-86a8-8a348431af9e\") " Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.727430 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.729900 4822 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config" (OuterVolumeSpecName: "config") pod "d79f3f30-0efc-4f81-86a8-8a348431af9e" (UID: "d79f3f30-0efc-4f81-86a8-8a348431af9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.731404 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d79f3f30-0efc-4f81-86a8-8a348431af9e" (UID: "d79f3f30-0efc-4f81-86a8-8a348431af9e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.736989 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d79f3f30-0efc-4f81-86a8-8a348431af9e" (UID: "d79f3f30-0efc-4f81-86a8-8a348431af9e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.739966 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249" (OuterVolumeSpecName: "kube-api-access-vf249") pod "d79f3f30-0efc-4f81-86a8-8a348431af9e" (UID: "d79f3f30-0efc-4f81-86a8-8a348431af9e"). InnerVolumeSpecName "kube-api-access-vf249". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.742581 4822 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.742614 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.764160 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.810740 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcmh\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.828804 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.828834 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79f3f30-0efc-4f81-86a8-8a348431af9e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.828845 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79f3f30-0efc-4f81-86a8-8a348431af9e-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.828866 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf249\" (UniqueName: \"kubernetes.io/projected/d79f3f30-0efc-4f81-86a8-8a348431af9e-kube-api-access-vf249\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.897045 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:32 crc kubenswrapper[4822]: I0224 09:11:32.898360 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:11:32 crc kubenswrapper[4822]: W0224 09:11:32.915498 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5b1d8c_f894_4c8a_b82b_052aa58260e2.slice/crio-c05a6e9277c2505db61fcb3a3ed7f61ea6fbd534bc87e30ed23295af9b358855 WatchSource:0}: Error finding container c05a6e9277c2505db61fcb3a3ed7f61ea6fbd534bc87e30ed23295af9b358855: Status 404 returned error can't find the container with id c05a6e9277c2505db61fcb3a3ed7f61ea6fbd534bc87e30ed23295af9b358855 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.032972 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xttrk\" (UniqueName: \"kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk\") pod \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.033145 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume\") pod \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 
09:11:33.033320 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume\") pod \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\" (UID: \"9a50eba2-73f2-4dcb-83a6-a1375a07be13\") " Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.033830 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a50eba2-73f2-4dcb-83a6-a1375a07be13" (UID: "9a50eba2-73f2-4dcb-83a6-a1375a07be13"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.039695 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a50eba2-73f2-4dcb-83a6-a1375a07be13" (UID: "9a50eba2-73f2-4dcb-83a6-a1375a07be13"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.041533 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk" (OuterVolumeSpecName: "kube-api-access-xttrk") pod "9a50eba2-73f2-4dcb-83a6-a1375a07be13" (UID: "9a50eba2-73f2-4dcb-83a6-a1375a07be13"). InnerVolumeSpecName "kube-api-access-xttrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.063490 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119082 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"] Feb 24 09:11:33 crc kubenswrapper[4822]: E0224 09:11:33.119266 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a50eba2-73f2-4dcb-83a6-a1375a07be13" containerName="collect-profiles" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119282 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a50eba2-73f2-4dcb-83a6-a1375a07be13" containerName="collect-profiles" Feb 24 09:11:33 crc kubenswrapper[4822]: E0224 09:11:33.119300 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119308 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: E0224 09:11:33.119318 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79f3f30-0efc-4f81-86a8-8a348431af9e" containerName="route-controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119324 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79f3f30-0efc-4f81-86a8-8a348431af9e" containerName="route-controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119399 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a50eba2-73f2-4dcb-83a6-a1375a07be13" containerName="collect-profiles" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119411 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79f3f30-0efc-4f81-86a8-8a348431af9e" containerName="route-controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119420 4822 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" containerName="controller-manager" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.119751 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.121491 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.122922 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.122958 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.123250 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.123387 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.123421 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.125137 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.126250 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.131452 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.132976 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.136293 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a50eba2-73f2-4dcb-83a6-a1375a07be13-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.136323 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xttrk\" (UniqueName: \"kubernetes.io/projected/9a50eba2-73f2-4dcb-83a6-a1375a07be13-kube-api-access-xttrk\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.136336 4822 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a50eba2-73f2-4dcb-83a6-a1375a07be13-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.140870 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238153 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238638 4822 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238668 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238705 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cvs\" (UniqueName: \"kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238782 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmp9k\" (UniqueName: \"kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238953 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config\") 
pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.238994 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.239035 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.239089 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.261635 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:33 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld Feb 24 09:11:33 crc kubenswrapper[4822]: [+]process-running ok Feb 24 09:11:33 crc 
kubenswrapper[4822]: healthz check failed Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.261731 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.297960 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58280: no serving certificate available for the kubelet" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.317011 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339639 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmp9k\" (UniqueName: \"kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339709 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339739 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " 
pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339760 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339790 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339810 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339837 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339857 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca\") pod 
\"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.339880 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cvs\" (UniqueName: \"kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.340833 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.341152 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.341556 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.342559 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.343130 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.347004 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.347544 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.357566 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmp9k\" (UniqueName: \"kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k\") pod \"route-controller-manager-6fb8d577b9-9pfd4\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: 
I0224 09:11:33.359964 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cvs\" (UniqueName: \"kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs\") pod \"controller-manager-7f56dfcfc7-88jvt\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.501359 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.508528 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.557090 4822 generic.go:334] "Generic (PLEG): container finished" podID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerID="89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7" exitCode=0 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.557142 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerDied","Data":"89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.557163 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerStarted","Data":"50c7039a94fb9bdc94700fe12b7d4bd7a446910f38720fc4fb8a88bbc4aa9ad2"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.558946 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.568967 4822 generic.go:334] "Generic (PLEG): container 
finished" podID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerID="a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b" exitCode=0 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.569107 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerDied","Data":"a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.569190 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerStarted","Data":"c05a6e9277c2505db61fcb3a3ed7f61ea6fbd534bc87e30ed23295af9b358855"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.579715 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" event={"ID":"f9ca89b3-e69d-4443-9e13-10ec52c688e5","Type":"ContainerStarted","Data":"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.579764 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" event={"ID":"f9ca89b3-e69d-4443-9e13-10ec52c688e5","Type":"ContainerStarted","Data":"1d121c48b64d459621bc78491870cb2cbbb4bf1c016b042748ba0939b49dd07d"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.580424 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.588985 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" event={"ID":"9a50eba2-73f2-4dcb-83a6-a1375a07be13","Type":"ContainerDied","Data":"dc71f583bac1eb90cdeb6daa92910aa5c5c6d2206020105013b13713e7985281"} Feb 24 
09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.589029 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc71f583bac1eb90cdeb6daa92910aa5c5c6d2206020105013b13713e7985281" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.589000 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.613661 4822 generic.go:334] "Generic (PLEG): container finished" podID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerID="ca2a7165bc38d19f21296b61dfc07b7af3e8058f7ef6cbcb9ca73f5a954d2629" exitCode=0 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.613748 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerDied","Data":"ca2a7165bc38d19f21296b61dfc07b7af3e8058f7ef6cbcb9ca73f5a954d2629"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.628017 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" event={"ID":"d79f3f30-0efc-4f81-86a8-8a348431af9e","Type":"ContainerDied","Data":"43ba737836198fd6c345e10fa199eb9ba947e1954c96c0a9ee47fef7c9f683b4"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.628060 4822 scope.go:117] "RemoveContainer" containerID="439186f46ce6aac4e187fd2db5f87105832225f04daf31799d00dcc844fa38ba" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.628069 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.643064 4822 generic.go:334] "Generic (PLEG): container finished" podID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerID="a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e" exitCode=0 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.643165 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerDied","Data":"a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.643295 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerStarted","Data":"07a5aa72101e9671bfdf4283fca9d725ae7c1abc4f8d28bc1bb258f9ea37ecea"} Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.668114 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" podStartSLOduration=170.668097277 podStartE2EDuration="2m50.668097277s" podCreationTimestamp="2026-02-24 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:33.6478585 +0000 UTC m=+216.035621058" watchObservedRunningTime="2026-02-24 09:11:33.668097277 +0000 UTC m=+216.055859825" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.688617 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.708849 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2m726"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.822613 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"] Feb 24 09:11:33 crc kubenswrapper[4822]: W0224 09:11:33.832244 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c12733_bf69_45df_93fa_a4e2faeeed06.slice/crio-e7819ce9eb5d047225044e6189bfbb0434ddd6ff5999d47852953c6328e7b30c WatchSource:0}: Error finding container e7819ce9eb5d047225044e6189bfbb0434ddd6ff5999d47852953c6328e7b30c: Status 404 returned error can't find the container with id e7819ce9eb5d047225044e6189bfbb0434ddd6ff5999d47852953c6328e7b30c Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.879432 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"] Feb 24 09:11:33 crc kubenswrapper[4822]: W0224 09:11:33.882686 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod610804c2_4d1f_474c_a92c_563810e293dd.slice/crio-dacbc2ef540769543e6872d47cd311f17cce778bc32a73362ff7e8c05f7e8934 WatchSource:0}: Error finding container dacbc2ef540769543e6872d47cd311f17cce778bc32a73362ff7e8c05f7e8934: Status 404 returned error can't find the container with id dacbc2ef540769543e6872d47cd311f17cce778bc32a73362ff7e8c05f7e8934 Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.912584 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"] Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.913485 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.915529 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 09:11:33 crc kubenswrapper[4822]: I0224 09:11:33.943443 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.066464 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.066565 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.066769 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khgqc\" (UniqueName: \"kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.167668 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities\") pod \"redhat-marketplace-jm48n\" (UID: 
\"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.167736 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.167822 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khgqc\" (UniqueName: \"kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.168303 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.168338 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content\") pod \"redhat-marketplace-jm48n\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.192439 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khgqc\" (UniqueName: \"kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc\") pod \"redhat-marketplace-jm48n\" (UID: 
\"b90902ec-35f8-4f8e-8d81-b813f439629c\") " pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.230789 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.262973 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:34 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld Feb 24 09:11:34 crc kubenswrapper[4822]: [+]process-running ok Feb 24 09:11:34 crc kubenswrapper[4822]: healthz check failed Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.264369 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.289450 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.290085 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.296366 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.296668 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.299860 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.335498 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.336433 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.352610 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02782b47-e00a-4585-9f89-4fe9585931e5" path="/var/lib/kubelet/pods/02782b47-e00a-4585-9f89-4fe9585931e5/volumes" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.355446 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.356338 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79f3f30-0efc-4f81-86a8-8a348431af9e" path="/var/lib/kubelet/pods/d79f3f30-0efc-4f81-86a8-8a348431af9e/volumes" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.356997 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.485888 
4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.486232 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dm8\" (UniqueName: \"kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.486278 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.486312 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.486334 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " 
pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.542059 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.587608 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dm8\" (UniqueName: \"kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.587675 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.587712 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.587736 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.587762 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.588537 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.589170 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.590015 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.619233 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.619334 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dm8\" (UniqueName: 
\"kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8\") pod \"redhat-marketplace-lcvsl\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.661344 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.712118 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerStarted","Data":"2df00c05c8d084121d723871c1cea3df6fda115bbf8e315070ce0659c03ac208"} Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.724204 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" event={"ID":"11c12733-bf69-45df-93fa-a4e2faeeed06","Type":"ContainerStarted","Data":"4df2994bf39fe1f8927ce0a9aaf7a73ef2208d7e8688fd29d3686ee95fc147cd"} Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.724238 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" event={"ID":"11c12733-bf69-45df-93fa-a4e2faeeed06","Type":"ContainerStarted","Data":"e7819ce9eb5d047225044e6189bfbb0434ddd6ff5999d47852953c6328e7b30c"} Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.725318 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.745723 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.746450 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" podStartSLOduration=3.746441554 podStartE2EDuration="3.746441554s" podCreationTimestamp="2026-02-24 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:34.745479346 +0000 UTC m=+217.133241894" watchObservedRunningTime="2026-02-24 09:11:34.746441554 +0000 UTC m=+217.134204102" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.773610 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" event={"ID":"610804c2-4d1f-474c-a92c-563810e293dd","Type":"ContainerStarted","Data":"70a28d7722d4ad236765469d6adf2b52ab625841f15f8c19e0867e937e06ba62"} Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.773654 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" event={"ID":"610804c2-4d1f-474c-a92c-563810e293dd","Type":"ContainerStarted","Data":"dacbc2ef540769543e6872d47cd311f17cce778bc32a73362ff7e8c05f7e8934"} Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.774206 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.789148 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.899737 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" podStartSLOduration=3.89972259 podStartE2EDuration="3.89972259s" podCreationTimestamp="2026-02-24 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:34.858084201 +0000 UTC m=+217.245846749" watchObservedRunningTime="2026-02-24 09:11:34.89972259 +0000 UTC m=+217.287485138" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.918996 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.971268 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.978934 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.984480 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.984706 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.986217 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.988175 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 24 09:11:34 crc kubenswrapper[4822]: I0224 09:11:34.988362 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.000436 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.018609 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.105222 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.105257 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.105277 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.105296 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.105358 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzg5v\" (UniqueName: \"kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.206727 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207042 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207059 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207076 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207115 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzg5v\" (UniqueName: \"kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207562 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207607 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.207608 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.227076 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.245664 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzg5v\" (UniqueName: \"kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v\") pod \"redhat-operators-xhmth\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.262196 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 09:11:35 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld
Feb 24 09:11:35 crc kubenswrapper[4822]: [+]process-running ok
Feb 24 09:11:35 crc kubenswrapper[4822]: healthz check failed
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.262246 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.293595 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vhssf"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.298375 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vhssf"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.324209 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.325181 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.347389 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.354183 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.415204 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.415222 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldh2\" (UniqueName: \"kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.415281 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.415333 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.429478 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.446207 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.516816 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldh2\" (UniqueName: \"kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.516877 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.516958 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.517420 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.517698 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.538605 4822 patch_prober.go:28] interesting pod/downloads-7954f5f757-z895m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.538656 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z895m" podUID="8cd11009-44d6-4539-b702-958f388fc85e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.538613 4822 patch_prober.go:28] interesting pod/downloads-7954f5f757-z895m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.538858 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z895m" podUID="8cd11009-44d6-4539-b702-958f388fc85e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.560985 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldh2\" (UniqueName: \"kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2\") pod \"redhat-operators-tmzzx\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.673592 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmzzx"
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.754564 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"]
Feb 24 09:11:35 crc kubenswrapper[4822]: W0224 09:11:35.788065 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70973b60_6421_4c72_b5ba_b5ad78d060e7.slice/crio-0d29a7d1df3799b8749667020819697f078fb67bfed92f865a2f930ede8d15b8 WatchSource:0}: Error finding container 0d29a7d1df3799b8749667020819697f078fb67bfed92f865a2f930ede8d15b8: Status 404 returned error can't find the container with id 0d29a7d1df3799b8749667020819697f078fb67bfed92f865a2f930ede8d15b8
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.810765 4822 generic.go:334] "Generic (PLEG): container finished" podID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerID="8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0" exitCode=0
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.811298 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerDied","Data":"8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0"}
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.820320 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9448daa-c156-4dbe-8ddb-4153f7e83aeb","Type":"ContainerStarted","Data":"a285d9d01535765977871f3121cb020bdcac79ac4653307cc66f24359b6e533f"}
Feb 24 09:11:35 crc kubenswrapper[4822]: I0224 09:11:35.838177 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerStarted","Data":"2ecc4343792786b583eb63364d49560f10e29842939b7795ea89bd4740100712"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.130122 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6shfw"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.130405 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6shfw"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.144666 4822 patch_prober.go:28] interesting pod/console-f9d7485db-6shfw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.144719 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6shfw" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.187123 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.193719 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"]
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.220441 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.257010 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5lckc"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.262100 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 09:11:36 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld
Feb 24 09:11:36 crc kubenswrapper[4822]: [+]process-running ok
Feb 24 09:11:36 crc kubenswrapper[4822]: healthz check failed
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.262161 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.843674 4822 generic.go:334] "Generic (PLEG): container finished" podID="02875d0b-b186-4481-bf06-923e1d91f53f" containerID="80fea790ba8ef5802df83d7f33694fd8391dd56406109024313ba18b3a2dfd8c" exitCode=0
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.843758 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerDied","Data":"80fea790ba8ef5802df83d7f33694fd8391dd56406109024313ba18b3a2dfd8c"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.847120 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73ec63f4-10a6-4388-8bdf-ef28375b82e4","Type":"ContainerStarted","Data":"15af35ae200139a7f28f55ceda2db16e03815a31f0acb13f8068fd1925d8a444"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.847150 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73ec63f4-10a6-4388-8bdf-ef28375b82e4","Type":"ContainerStarted","Data":"982d907c839b884e690da0a996e50f772c4364649d3db78c1690d2d5668a1244"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.848625 4822 generic.go:334] "Generic (PLEG): container finished" podID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerID="281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c" exitCode=0
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.848692 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerDied","Data":"281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.848714 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerStarted","Data":"0d29a7d1df3799b8749667020819697f078fb67bfed92f865a2f930ede8d15b8"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.851535 4822 generic.go:334] "Generic (PLEG): container finished" podID="b9448daa-c156-4dbe-8ddb-4153f7e83aeb" containerID="ab072e36fd34c5dccadd3ac044e584df385285ae32051ac3f257ebdf649cbb9f" exitCode=0
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.851655 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9448daa-c156-4dbe-8ddb-4153f7e83aeb","Type":"ContainerDied","Data":"ab072e36fd34c5dccadd3ac044e584df385285ae32051ac3f257ebdf649cbb9f"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.853873 4822 generic.go:334] "Generic (PLEG): container finished" podID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerID="83230ecab1e9147b63c87792b8c2c0caf8f134c81add3ca57a016a0ad229ff65" exitCode=0
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.854853 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerDied","Data":"83230ecab1e9147b63c87792b8c2c0caf8f134c81add3ca57a016a0ad229ff65"}
Feb 24 09:11:36 crc kubenswrapper[4822]: I0224 09:11:36.854889 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerStarted","Data":"fc111fb567bcefab45e255fcda4ba2f6f1676d2a6fee29336a5db690c48fadc9"}
Feb 24 09:11:37 crc kubenswrapper[4822]: I0224 09:11:37.272393 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 09:11:37 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld
Feb 24 09:11:37 crc kubenswrapper[4822]: [+]process-running ok
Feb 24 09:11:37 crc kubenswrapper[4822]: healthz check failed
Feb 24 09:11:37 crc kubenswrapper[4822]: I0224 09:11:37.272453 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 09:11:37 crc kubenswrapper[4822]: I0224 09:11:37.878757 4822 generic.go:334] "Generic (PLEG): container finished" podID="73ec63f4-10a6-4388-8bdf-ef28375b82e4" containerID="15af35ae200139a7f28f55ceda2db16e03815a31f0acb13f8068fd1925d8a444" exitCode=0
Feb 24 09:11:37 crc kubenswrapper[4822]: I0224 09:11:37.878810 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73ec63f4-10a6-4388-8bdf-ef28375b82e4","Type":"ContainerDied","Data":"15af35ae200139a7f28f55ceda2db16e03815a31f0acb13f8068fd1925d8a444"}
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.259703 4822 patch_prober.go:28] interesting pod/router-default-5444994796-5lckc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 09:11:38 crc kubenswrapper[4822]: [-]has-synced failed: reason withheld
Feb 24 09:11:38 crc kubenswrapper[4822]: [+]process-running ok
Feb 24 09:11:38 crc kubenswrapper[4822]: healthz check failed
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.260682 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lckc" podUID="0d142bb1-7f07-498b-a6d6-d378bb619c22" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.262611 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.302134 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir\") pod \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") "
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.302208 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access\") pod \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\" (UID: \"b9448daa-c156-4dbe-8ddb-4153f7e83aeb\") "
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.303724 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9448daa-c156-4dbe-8ddb-4153f7e83aeb" (UID: "b9448daa-c156-4dbe-8ddb-4153f7e83aeb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.314151 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9448daa-c156-4dbe-8ddb-4153f7e83aeb" (UID: "b9448daa-c156-4dbe-8ddb-4153f7e83aeb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.379594 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c9nn5"
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.403538 4822 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.403571 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9448daa-c156-4dbe-8ddb-4153f7e83aeb-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.450435 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58284: no serving certificate available for the kubelet"
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.892343 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.892834 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9448daa-c156-4dbe-8ddb-4153f7e83aeb","Type":"ContainerDied","Data":"a285d9d01535765977871f3121cb020bdcac79ac4653307cc66f24359b6e533f"}
Feb 24 09:11:38 crc kubenswrapper[4822]: I0224 09:11:38.892859 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a285d9d01535765977871f3121cb020bdcac79ac4653307cc66f24359b6e533f"
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.265506 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5lckc"
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.275000 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5lckc"
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.388055 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.524475 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir\") pod \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") "
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.524574 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access\") pod \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\" (UID: \"73ec63f4-10a6-4388-8bdf-ef28375b82e4\") "
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.524883 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73ec63f4-10a6-4388-8bdf-ef28375b82e4" (UID: "73ec63f4-10a6-4388-8bdf-ef28375b82e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.529875 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73ec63f4-10a6-4388-8bdf-ef28375b82e4" (UID: "73ec63f4-10a6-4388-8bdf-ef28375b82e4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.625636 4822 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.625667 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73ec63f4-10a6-4388-8bdf-ef28375b82e4-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.911815 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73ec63f4-10a6-4388-8bdf-ef28375b82e4","Type":"ContainerDied","Data":"982d907c839b884e690da0a996e50f772c4364649d3db78c1690d2d5668a1244"}
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.912781 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982d907c839b884e690da0a996e50f772c4364649d3db78c1690d2d5668a1244"
Feb 24 09:11:39 crc kubenswrapper[4822]: I0224 09:11:39.912374 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 24 09:11:40 crc kubenswrapper[4822]: I0224 09:11:40.925365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58296: no serving certificate available for the kubelet"
Feb 24 09:11:45 crc kubenswrapper[4822]: I0224 09:11:45.540607 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-z895m"
Feb 24 09:11:45 crc kubenswrapper[4822]: I0224 09:11:45.676095 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:11:45 crc kubenswrapper[4822]: I0224 09:11:45.676157 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:11:45 crc kubenswrapper[4822]: I0224 09:11:45.770802 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-669bp"
Feb 24 09:11:46 crc kubenswrapper[4822]: I0224 09:11:46.146679 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6shfw"
Feb 24 09:11:46 crc kubenswrapper[4822]: I0224 09:11:46.150721 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6shfw"
Feb 24 09:11:48 crc kubenswrapper[4822]: I0224 09:11:48.720651 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60052: no serving certificate available for the kubelet"
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.054695 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"]
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.055154 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerName="controller-manager" containerID="cri-o://4df2994bf39fe1f8927ce0a9aaf7a73ef2208d7e8688fd29d3686ee95fc147cd" gracePeriod=30
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.073076 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"]
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.073267 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" podUID="610804c2-4d1f-474c-a92c-563810e293dd" containerName="route-controller-manager" containerID="cri-o://70a28d7722d4ad236765469d6adf2b52ab625841f15f8c19e0867e937e06ba62" gracePeriod=30
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.992200 4822 generic.go:334] "Generic (PLEG): container finished" podID="610804c2-4d1f-474c-a92c-563810e293dd" containerID="70a28d7722d4ad236765469d6adf2b52ab625841f15f8c19e0867e937e06ba62" exitCode=0
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.992316 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" event={"ID":"610804c2-4d1f-474c-a92c-563810e293dd","Type":"ContainerDied","Data":"70a28d7722d4ad236765469d6adf2b52ab625841f15f8c19e0867e937e06ba62"}
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.994844 4822 generic.go:334] "Generic (PLEG): container finished" podID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerID="4df2994bf39fe1f8927ce0a9aaf7a73ef2208d7e8688fd29d3686ee95fc147cd" exitCode=0
Feb 24 09:11:51 crc kubenswrapper[4822]: I0224 09:11:51.994880 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" event={"ID":"11c12733-bf69-45df-93fa-a4e2faeeed06","Type":"ContainerDied","Data":"4df2994bf39fe1f8927ce0a9aaf7a73ef2208d7e8688fd29d3686ee95fc147cd"}
Feb 24 09:11:53 crc kubenswrapper[4822]: I0224 09:11:53.099993 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh"
Feb 24 09:11:53 crc kubenswrapper[4822]: I0224 09:11:53.569508 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 09:11:54 crc kubenswrapper[4822]: I0224 09:11:54.502598 4822 patch_prober.go:28] interesting pod/controller-manager-7f56dfcfc7-88jvt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:11:54 crc kubenswrapper[4822]: I0224 09:11:54.502928 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:11:54 crc kubenswrapper[4822]: I0224 09:11:54.510584 4822 patch_prober.go:28] interesting pod/route-controller-manager-6fb8d577b9-9pfd4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:11:54 crc kubenswrapper[4822]: I0224 09:11:54.510659 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" podUID="610804c2-4d1f-474c-a92c-563810e293dd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.073938 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"
Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.130519 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"]
Feb 24 09:11:56 crc kubenswrapper[4822]: E0224 09:11:56.131137 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9448daa-c156-4dbe-8ddb-4153f7e83aeb" containerName="pruner"
Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131153 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9448daa-c156-4dbe-8ddb-4153f7e83aeb" containerName="pruner"
Feb 24 09:11:56 crc kubenswrapper[4822]: E0224 09:11:56.131164 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec63f4-10a6-4388-8bdf-ef28375b82e4" containerName="pruner"
Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131172 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec63f4-10a6-4388-8bdf-ef28375b82e4" containerName="pruner"
Feb 24 09:11:56 crc kubenswrapper[4822]: E0224 09:11:56.131186 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerName="controller-manager"
Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131194 4822 state_mem.go:107] "Deleted CPUSet assignment"
podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerName="controller-manager" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131330 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9448daa-c156-4dbe-8ddb-4153f7e83aeb" containerName="pruner" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131348 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" containerName="controller-manager" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131360 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec63f4-10a6-4388-8bdf-ef28375b82e4" containerName="pruner" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.131820 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.137619 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"] Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.197651 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca\") pod \"11c12733-bf69-45df-93fa-a4e2faeeed06\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.197751 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2cvs\" (UniqueName: \"kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs\") pod \"11c12733-bf69-45df-93fa-a4e2faeeed06\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198270 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config\") pod \"11c12733-bf69-45df-93fa-a4e2faeeed06\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198374 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert\") pod \"11c12733-bf69-45df-93fa-a4e2faeeed06\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198422 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles\") pod \"11c12733-bf69-45df-93fa-a4e2faeeed06\" (UID: \"11c12733-bf69-45df-93fa-a4e2faeeed06\") " Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198689 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198779 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198811 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.198950 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca" (OuterVolumeSpecName: "client-ca") pod "11c12733-bf69-45df-93fa-a4e2faeeed06" (UID: "11c12733-bf69-45df-93fa-a4e2faeeed06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199065 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "11c12733-bf69-45df-93fa-a4e2faeeed06" (UID: "11c12733-bf69-45df-93fa-a4e2faeeed06"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199361 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config" (OuterVolumeSpecName: "config") pod "11c12733-bf69-45df-93fa-a4e2faeeed06" (UID: "11c12733-bf69-45df-93fa-a4e2faeeed06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199398 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbn5\" (UniqueName: \"kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199495 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199578 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199601 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.199613 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11c12733-bf69-45df-93fa-a4e2faeeed06-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.213162 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs" (OuterVolumeSpecName: "kube-api-access-h2cvs") 
pod "11c12733-bf69-45df-93fa-a4e2faeeed06" (UID: "11c12733-bf69-45df-93fa-a4e2faeeed06"). InnerVolumeSpecName "kube-api-access-h2cvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.213206 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11c12733-bf69-45df-93fa-a4e2faeeed06" (UID: "11c12733-bf69-45df-93fa-a4e2faeeed06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301401 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301473 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301501 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301541 4822 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hwbn5\" (UniqueName: \"kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301587 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301633 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11c12733-bf69-45df-93fa-a4e2faeeed06-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.301647 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2cvs\" (UniqueName: \"kubernetes.io/projected/11c12733-bf69-45df-93fa-a4e2faeeed06-kube-api-access-h2cvs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.303831 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.304978 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: 
\"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.306510 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.307863 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.321769 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbn5\" (UniqueName: \"kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5\") pod \"controller-manager-5bbc45669c-pf2c9\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:56 crc kubenswrapper[4822]: I0224 09:11:56.457591 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:11:57 crc kubenswrapper[4822]: I0224 09:11:57.022114 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" event={"ID":"11c12733-bf69-45df-93fa-a4e2faeeed06","Type":"ContainerDied","Data":"e7819ce9eb5d047225044e6189bfbb0434ddd6ff5999d47852953c6328e7b30c"} Feb 24 09:11:57 crc kubenswrapper[4822]: I0224 09:11:57.022202 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt" Feb 24 09:11:57 crc kubenswrapper[4822]: I0224 09:11:57.022266 4822 scope.go:117] "RemoveContainer" containerID="4df2994bf39fe1f8927ce0a9aaf7a73ef2208d7e8688fd29d3686ee95fc147cd" Feb 24 09:11:57 crc kubenswrapper[4822]: I0224 09:11:57.038600 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"] Feb 24 09:11:57 crc kubenswrapper[4822]: I0224 09:11:57.041484 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f56dfcfc7-88jvt"] Feb 24 09:11:58 crc kubenswrapper[4822]: I0224 09:11:58.358419 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11c12733-bf69-45df-93fa-a4e2faeeed06" path="/var/lib/kubelet/pods/11c12733-bf69-45df-93fa-a4e2faeeed06/volumes" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.005177 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.040029 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:02 crc kubenswrapper[4822]: E0224 09:12:02.040269 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610804c2-4d1f-474c-a92c-563810e293dd" containerName="route-controller-manager" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.040294 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="610804c2-4d1f-474c-a92c-563810e293dd" containerName="route-controller-manager" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.040408 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="610804c2-4d1f-474c-a92c-563810e293dd" containerName="route-controller-manager" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.040814 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.056499 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" event={"ID":"610804c2-4d1f-474c-a92c-563810e293dd","Type":"ContainerDied","Data":"dacbc2ef540769543e6872d47cd311f17cce778bc32a73362ff7e8c05f7e8934"} Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.056531 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.056557 4822 scope.go:117] "RemoveContainer" containerID="70a28d7722d4ad236765469d6adf2b52ab625841f15f8c19e0867e937e06ba62" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.058072 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.102920 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert\") pod \"610804c2-4d1f-474c-a92c-563810e293dd\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.102967 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config\") pod \"610804c2-4d1f-474c-a92c-563810e293dd\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103024 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmp9k\" (UniqueName: \"kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k\") pod \"610804c2-4d1f-474c-a92c-563810e293dd\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103046 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca\") pod \"610804c2-4d1f-474c-a92c-563810e293dd\" (UID: \"610804c2-4d1f-474c-a92c-563810e293dd\") " Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103144 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103188 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103206 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6j2\" (UniqueName: \"kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.103223 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.104411 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"610804c2-4d1f-474c-a92c-563810e293dd" (UID: "610804c2-4d1f-474c-a92c-563810e293dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.104424 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config" (OuterVolumeSpecName: "config") pod "610804c2-4d1f-474c-a92c-563810e293dd" (UID: "610804c2-4d1f-474c-a92c-563810e293dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.111724 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k" (OuterVolumeSpecName: "kube-api-access-xmp9k") pod "610804c2-4d1f-474c-a92c-563810e293dd" (UID: "610804c2-4d1f-474c-a92c-563810e293dd"). InnerVolumeSpecName "kube-api-access-xmp9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.111785 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "610804c2-4d1f-474c-a92c-563810e293dd" (UID: "610804c2-4d1f-474c-a92c-563810e293dd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204149 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204334 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6j2\" (UniqueName: \"kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204358 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204428 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204466 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/610804c2-4d1f-474c-a92c-563810e293dd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204477 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204486 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmp9k\" (UniqueName: \"kubernetes.io/projected/610804c2-4d1f-474c-a92c-563810e293dd-kube-api-access-xmp9k\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.204496 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/610804c2-4d1f-474c-a92c-563810e293dd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.205687 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.206298 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.211292 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert\") pod 
\"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.220984 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6j2\" (UniqueName: \"kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2\") pod \"route-controller-manager-65d9cb4575-7znh7\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.387428 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.401974 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"] Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.404489 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb8d577b9-9pfd4"] Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.526364 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"] Feb 24 09:12:02 crc kubenswrapper[4822]: I0224 09:12:02.631020 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:02 crc kubenswrapper[4822]: W0224 09:12:02.634797 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04b27ea7_139e_4839_807e_7ca727987352.slice/crio-a71a889f5fbd67df18343cf5c71d936398036f40c0150502a1c74f2abd85fa63 WatchSource:0}: Error finding container 
a71a889f5fbd67df18343cf5c71d936398036f40c0150502a1c74f2abd85fa63: Status 404 returned error can't find the container with id a71a889f5fbd67df18343cf5c71d936398036f40c0150502a1c74f2abd85fa63 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.063804 4822 generic.go:334] "Generic (PLEG): container finished" podID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerID="b9fc45af2a7140ef7e6abf7a716bcec1a332673f662044bc55b763b622db0824" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.063868 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerDied","Data":"b9fc45af2a7140ef7e6abf7a716bcec1a332673f662044bc55b763b622db0824"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.065297 4822 generic.go:334] "Generic (PLEG): container finished" podID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerID="b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.065342 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerDied","Data":"b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.067077 4822 generic.go:334] "Generic (PLEG): container finished" podID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerID="9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.067157 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerDied","Data":"9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.075712 4822 
generic.go:334] "Generic (PLEG): container finished" podID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerID="47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.075792 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerDied","Data":"47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.086555 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" event={"ID":"0168beb4-ff3a-4410-9f12-8efae0d986c5","Type":"ContainerStarted","Data":"aca0b736270c26d881ff8124bbdb036b96117d98cbf28eff08248aa19b861787"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.086599 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" event={"ID":"0168beb4-ff3a-4410-9f12-8efae0d986c5","Type":"ContainerStarted","Data":"ea22d60ee40f7ab0860e76b33f99fbb0fae387be6bd34aa4fb7db181cd0490b0"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.101269 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerStarted","Data":"6de92bdb09cf07e9cffbd517d60960696184fd0f9159a76aea5b00183891402f"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.106371 4822 generic.go:334] "Generic (PLEG): container finished" podID="02875d0b-b186-4481-bf06-923e1d91f53f" containerID="87c2a4ada017ce6469931ab478c3282afb4e87d9c134c3878fa6ff967bc55abe" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.106444 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" 
event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerDied","Data":"87c2a4ada017ce6469931ab478c3282afb4e87d9c134c3878fa6ff967bc55abe"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.113368 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerStarted","Data":"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.117824 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" event={"ID":"04b27ea7-139e-4839-807e-7ca727987352","Type":"ContainerStarted","Data":"a71a889f5fbd67df18343cf5c71d936398036f40c0150502a1c74f2abd85fa63"} Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.120379 4822 generic.go:334] "Generic (PLEG): container finished" podID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerID="5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4822]: I0224 09:12:03.120419 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerDied","Data":"5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961"} Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.021318 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.126978 4822 generic.go:334] "Generic (PLEG): container finished" podID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerID="6de92bdb09cf07e9cffbd517d60960696184fd0f9159a76aea5b00183891402f" exitCode=0 Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.127065 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerDied","Data":"6de92bdb09cf07e9cffbd517d60960696184fd0f9159a76aea5b00183891402f"} Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.128866 4822 generic.go:334] "Generic (PLEG): container finished" podID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerID="ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430" exitCode=0 Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.128927 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerDied","Data":"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"} Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.131945 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" event={"ID":"04b27ea7-139e-4839-807e-7ca727987352","Type":"ContainerStarted","Data":"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7"} Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.132205 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.132237 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.136276 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.137171 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:04 crc 
kubenswrapper[4822]: I0224 09:12:04.162581 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" podStartSLOduration=13.162563938 podStartE2EDuration="13.162563938s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:04.1608717 +0000 UTC m=+246.548634248" watchObservedRunningTime="2026-02-24 09:12:04.162563938 +0000 UTC m=+246.550326486" Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.183521 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" podStartSLOduration=13.183502762 podStartE2EDuration="13.183502762s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:04.182749601 +0000 UTC m=+246.570512149" watchObservedRunningTime="2026-02-24 09:12:04.183502762 +0000 UTC m=+246.571265300" Feb 24 09:12:04 crc kubenswrapper[4822]: I0224 09:12:04.343904 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610804c2-4d1f-474c-a92c-563810e293dd" path="/var/lib/kubelet/pods/610804c2-4d1f-474c-a92c-563810e293dd/volumes" Feb 24 09:12:05 crc kubenswrapper[4822]: I0224 09:12:05.150453 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerStarted","Data":"95cc63c47f1e01dfb821cc2d9b933533d01d42ba44eb1fe3d6b184fc824bbec5"} Feb 24 09:12:05 crc kubenswrapper[4822]: I0224 09:12:05.181428 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsxsb" podStartSLOduration=3.486330501 
podStartE2EDuration="34.181405121s" podCreationTimestamp="2026-02-24 09:11:31 +0000 UTC" firstStartedPulling="2026-02-24 09:11:33.620150981 +0000 UTC m=+216.007913529" lastFinishedPulling="2026-02-24 09:12:04.315225601 +0000 UTC m=+246.702988149" observedRunningTime="2026-02-24 09:12:05.179054045 +0000 UTC m=+247.566816643" watchObservedRunningTime="2026-02-24 09:12:05.181405121 +0000 UTC m=+247.569167679" Feb 24 09:12:06 crc kubenswrapper[4822]: I0224 09:12:06.160062 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerStarted","Data":"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"} Feb 24 09:12:06 crc kubenswrapper[4822]: I0224 09:12:06.336508 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mv4t2" Feb 24 09:12:07 crc kubenswrapper[4822]: I0224 09:12:07.187508 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wplj" podStartSLOduration=4.105934306 podStartE2EDuration="36.187478484s" podCreationTimestamp="2026-02-24 09:11:31 +0000 UTC" firstStartedPulling="2026-02-24 09:11:33.657693571 +0000 UTC m=+216.045456129" lastFinishedPulling="2026-02-24 09:12:05.739237769 +0000 UTC m=+248.127000307" observedRunningTime="2026-02-24 09:12:07.183104231 +0000 UTC m=+249.570866819" watchObservedRunningTime="2026-02-24 09:12:07.187478484 +0000 UTC m=+249.575241072" Feb 24 09:12:08 crc kubenswrapper[4822]: I0224 09:12:08.172848 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerStarted","Data":"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"} Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.179189 4822 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerStarted","Data":"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3"} Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.195852 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z68xq" podStartSLOduration=2.515397076 podStartE2EDuration="37.195825241s" podCreationTimestamp="2026-02-24 09:11:32 +0000 UTC" firstStartedPulling="2026-02-24 09:11:33.558715898 +0000 UTC m=+215.946478436" lastFinishedPulling="2026-02-24 09:12:08.239144053 +0000 UTC m=+250.626906601" observedRunningTime="2026-02-24 09:12:09.194808821 +0000 UTC m=+251.582571369" watchObservedRunningTime="2026-02-24 09:12:09.195825241 +0000 UTC m=+251.583587789" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.215151 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jm48n" podStartSLOduration=5.086784984 podStartE2EDuration="36.21512562s" podCreationTimestamp="2026-02-24 09:11:33 +0000 UTC" firstStartedPulling="2026-02-24 09:11:35.81931164 +0000 UTC m=+218.207074188" lastFinishedPulling="2026-02-24 09:12:06.947652276 +0000 UTC m=+249.335414824" observedRunningTime="2026-02-24 09:12:09.214190253 +0000 UTC m=+251.601952801" watchObservedRunningTime="2026-02-24 09:12:09.21512562 +0000 UTC m=+251.602888168" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.519925 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.520771 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.522641 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.523108 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.530891 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.604052 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.604152 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.706396 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.706477 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.706688 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.730672 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:09 crc kubenswrapper[4822]: I0224 09:12:09.833064 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.188122 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerStarted","Data":"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"} Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.194813 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerStarted","Data":"4c03751fc76eb0bb481b09fcfb35582f3f3639658b34add5eb47204b3de03ce7"} Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.197569 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerStarted","Data":"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c"} Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.201744 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerStarted","Data":"53027f624371cf0a9591ded2e469fb596208ceab57292522484c3323b029a3af"} Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.208936 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xhmth" podStartSLOduration=3.808366683 podStartE2EDuration="36.208919942s" podCreationTimestamp="2026-02-24 09:11:34 +0000 UTC" firstStartedPulling="2026-02-24 09:11:36.849863676 +0000 UTC m=+219.237626224" lastFinishedPulling="2026-02-24 09:12:09.250416935 +0000 UTC m=+251.638179483" observedRunningTime="2026-02-24 09:12:10.206939488 +0000 UTC m=+252.594702036" watchObservedRunningTime="2026-02-24 09:12:10.208919942 +0000 UTC m=+252.596682490" Feb 
24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.227412 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56bwc" podStartSLOduration=2.5481825110000003 podStartE2EDuration="38.227394439s" podCreationTimestamp="2026-02-24 09:11:32 +0000 UTC" firstStartedPulling="2026-02-24 09:11:33.571042652 +0000 UTC m=+215.958805200" lastFinishedPulling="2026-02-24 09:12:09.25025458 +0000 UTC m=+251.638017128" observedRunningTime="2026-02-24 09:12:10.224144018 +0000 UTC m=+252.611906566" watchObservedRunningTime="2026-02-24 09:12:10.227394439 +0000 UTC m=+252.615156977" Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.242955 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmzzx" podStartSLOduration=2.842207289 podStartE2EDuration="35.242939812s" podCreationTimestamp="2026-02-24 09:11:35 +0000 UTC" firstStartedPulling="2026-02-24 09:11:36.856428261 +0000 UTC m=+219.244190809" lastFinishedPulling="2026-02-24 09:12:09.257160784 +0000 UTC m=+251.644923332" observedRunningTime="2026-02-24 09:12:10.237617085 +0000 UTC m=+252.625379633" watchObservedRunningTime="2026-02-24 09:12:10.242939812 +0000 UTC m=+252.630702360" Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.289610 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcvsl" podStartSLOduration=3.880686486 podStartE2EDuration="36.289591755s" podCreationTimestamp="2026-02-24 09:11:34 +0000 UTC" firstStartedPulling="2026-02-24 09:11:36.84557739 +0000 UTC m=+219.233339938" lastFinishedPulling="2026-02-24 09:12:09.254482659 +0000 UTC m=+251.642245207" observedRunningTime="2026-02-24 09:12:10.259656999 +0000 UTC m=+252.647419547" watchObservedRunningTime="2026-02-24 09:12:10.289591755 +0000 UTC m=+252.677354303" Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.290676 4822 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:10 crc kubenswrapper[4822]: W0224 09:12:10.297144 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeba5255f_d8e3_43d0_8610_b18152fdaa48.slice/crio-5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d WatchSource:0}: Error finding container 5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d: Status 404 returned error can't find the container with id 5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.976714 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"] Feb 24 09:12:10 crc kubenswrapper[4822]: I0224 09:12:10.977152 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" podUID="0168beb4-ff3a-4410-9f12-8efae0d986c5" containerName="controller-manager" containerID="cri-o://aca0b736270c26d881ff8124bbdb036b96117d98cbf28eff08248aa19b861787" gracePeriod=30 Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.089767 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.089973 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" podUID="04b27ea7-139e-4839-807e-7ca727987352" containerName="route-controller-manager" containerID="cri-o://9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7" gracePeriod=30 Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.214544 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"eba5255f-d8e3-43d0-8610-b18152fdaa48","Type":"ContainerStarted","Data":"1f60a01fd8c140e9cfc2dc66beb25549e38edaa84e5b8044e26221ceb16a9ba7"} Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.214581 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eba5255f-d8e3-43d0-8610-b18152fdaa48","Type":"ContainerStarted","Data":"5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d"} Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.216123 4822 generic.go:334] "Generic (PLEG): container finished" podID="0168beb4-ff3a-4410-9f12-8efae0d986c5" containerID="aca0b736270c26d881ff8124bbdb036b96117d98cbf28eff08248aa19b861787" exitCode=0 Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.217142 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" event={"ID":"0168beb4-ff3a-4410-9f12-8efae0d986c5","Type":"ContainerDied","Data":"aca0b736270c26d881ff8124bbdb036b96117d98cbf28eff08248aa19b861787"} Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.561187 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.569397 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633349 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert\") pod \"04b27ea7-139e-4839-807e-7ca727987352\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633426 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles\") pod \"0168beb4-ff3a-4410-9f12-8efae0d986c5\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633449 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert\") pod \"0168beb4-ff3a-4410-9f12-8efae0d986c5\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633464 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca\") pod \"0168beb4-ff3a-4410-9f12-8efae0d986c5\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633488 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config\") pod \"04b27ea7-139e-4839-807e-7ca727987352\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633504 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config\") pod \"0168beb4-ff3a-4410-9f12-8efae0d986c5\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633523 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg6j2\" (UniqueName: \"kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2\") pod \"04b27ea7-139e-4839-807e-7ca727987352\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633572 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwbn5\" (UniqueName: \"kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5\") pod \"0168beb4-ff3a-4410-9f12-8efae0d986c5\" (UID: \"0168beb4-ff3a-4410-9f12-8efae0d986c5\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.633594 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca\") pod \"04b27ea7-139e-4839-807e-7ca727987352\" (UID: \"04b27ea7-139e-4839-807e-7ca727987352\") " Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.634298 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca" (OuterVolumeSpecName: "client-ca") pod "04b27ea7-139e-4839-807e-7ca727987352" (UID: "04b27ea7-139e-4839-807e-7ca727987352"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.634728 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config" (OuterVolumeSpecName: "config") pod "04b27ea7-139e-4839-807e-7ca727987352" (UID: "04b27ea7-139e-4839-807e-7ca727987352"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.634866 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config" (OuterVolumeSpecName: "config") pod "0168beb4-ff3a-4410-9f12-8efae0d986c5" (UID: "0168beb4-ff3a-4410-9f12-8efae0d986c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.635109 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0168beb4-ff3a-4410-9f12-8efae0d986c5" (UID: "0168beb4-ff3a-4410-9f12-8efae0d986c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.635472 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0168beb4-ff3a-4410-9f12-8efae0d986c5" (UID: "0168beb4-ff3a-4410-9f12-8efae0d986c5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.642066 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5" (OuterVolumeSpecName: "kube-api-access-hwbn5") pod "0168beb4-ff3a-4410-9f12-8efae0d986c5" (UID: "0168beb4-ff3a-4410-9f12-8efae0d986c5"). InnerVolumeSpecName "kube-api-access-hwbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.645069 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0168beb4-ff3a-4410-9f12-8efae0d986c5" (UID: "0168beb4-ff3a-4410-9f12-8efae0d986c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.645095 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04b27ea7-139e-4839-807e-7ca727987352" (UID: "04b27ea7-139e-4839-807e-7ca727987352"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.646674 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2" (OuterVolumeSpecName: "kube-api-access-rg6j2") pod "04b27ea7-139e-4839-807e-7ca727987352" (UID: "04b27ea7-139e-4839-807e-7ca727987352"). InnerVolumeSpecName "kube-api-access-rg6j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.738842 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04b27ea7-139e-4839-807e-7ca727987352-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739162 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739175 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0168beb4-ff3a-4410-9f12-8efae0d986c5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739183 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739198 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739207 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0168beb4-ff3a-4410-9f12-8efae0d986c5-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739216 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg6j2\" (UniqueName: \"kubernetes.io/projected/04b27ea7-139e-4839-807e-7ca727987352-kube-api-access-rg6j2\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739228 4822 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-hwbn5\" (UniqueName: \"kubernetes.io/projected/0168beb4-ff3a-4410-9f12-8efae0d986c5-kube-api-access-hwbn5\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:11 crc kubenswrapper[4822]: I0224 09:12:11.739236 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04b27ea7-139e-4839-807e-7ca727987352-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.078756 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.078813 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.151620 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 24 09:12:12 crc kubenswrapper[4822]: E0224 09:12:12.151935 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b27ea7-139e-4839-807e-7ca727987352" containerName="route-controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.151952 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b27ea7-139e-4839-807e-7ca727987352" containerName="route-controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: E0224 09:12:12.151968 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0168beb4-ff3a-4410-9f12-8efae0d986c5" containerName="controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.151976 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="0168beb4-ff3a-4410-9f12-8efae0d986c5" containerName="controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.152102 4822 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0168beb4-ff3a-4410-9f12-8efae0d986c5" containerName="controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.152120 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b27ea7-139e-4839-807e-7ca727987352" containerName="route-controller-manager" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.152556 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.153973 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.154648 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.161947 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.189457 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.224168 4822 generic.go:334] "Generic (PLEG): container finished" podID="eba5255f-d8e3-43d0-8610-b18152fdaa48" containerID="1f60a01fd8c140e9cfc2dc66beb25549e38edaa84e5b8044e26221ceb16a9ba7" exitCode=0 Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.224411 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eba5255f-d8e3-43d0-8610-b18152fdaa48","Type":"ContainerDied","Data":"1f60a01fd8c140e9cfc2dc66beb25549e38edaa84e5b8044e26221ceb16a9ba7"} Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.227028 4822 generic.go:334] "Generic (PLEG): container 
finished" podID="04b27ea7-139e-4839-807e-7ca727987352" containerID="9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7" exitCode=0 Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.227105 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.227119 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" event={"ID":"04b27ea7-139e-4839-807e-7ca727987352","Type":"ContainerDied","Data":"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7"} Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.227150 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7" event={"ID":"04b27ea7-139e-4839-807e-7ca727987352","Type":"ContainerDied","Data":"a71a889f5fbd67df18343cf5c71d936398036f40c0150502a1c74f2abd85fa63"} Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.227166 4822 scope.go:117] "RemoveContainer" containerID="9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.229601 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" event={"ID":"0168beb4-ff3a-4410-9f12-8efae0d986c5","Type":"ContainerDied","Data":"ea22d60ee40f7ab0860e76b33f99fbb0fae387be6bd34aa4fb7db181cd0490b0"} Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.229741 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc45669c-pf2c9" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.238402 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247116 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247153 4822 scope.go:117] "RemoveContainer" containerID="9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247161 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247201 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247241 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247313 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbxm\" (UniqueName: \"kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247348 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247539 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbdm\" (UniqueName: \"kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247594 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " 
pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.247633 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: E0224 09:12:12.248359 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7\": container with ID starting with 9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7 not found: ID does not exist" containerID="9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.248386 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7"} err="failed to get container status \"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7\": rpc error: code = NotFound desc = could not find container \"9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7\": container with ID starting with 9ef62dad55c56d03a1c44d2be62bb19de256a848d8396dc3222e90632dcd46a7 not found: ID does not exist" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.248404 4822 scope.go:117] "RemoveContainer" containerID="aca0b736270c26d881ff8124bbdb036b96117d98cbf28eff08248aa19b861787" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.256225 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.257126 
4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.282293 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.287990 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d9cb4575-7znh7"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.300582 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.301511 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.303900 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc45669c-pf2c9"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.313193 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.345657 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0168beb4-ff3a-4410-9f12-8efae0d986c5" path="/var/lib/kubelet/pods/0168beb4-ff3a-4410-9f12-8efae0d986c5/volumes" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.346479 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b27ea7-139e-4839-807e-7ca727987352" path="/var/lib/kubelet/pods/04b27ea7-139e-4839-807e-7ca727987352/volumes" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349079 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349118 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tbxm\" (UniqueName: \"kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349135 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349174 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpbdm\" (UniqueName: \"kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349191 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 
24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349217 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349238 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349254 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.349272 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.350699 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: 
\"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.354086 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.354586 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.355142 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.355492 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.355551 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.358367 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.373479 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tbxm\" (UniqueName: \"kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm\") pod \"controller-manager-5f8d6577d6-6bnv5\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.374938 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpbdm\" (UniqueName: \"kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm\") pod \"route-controller-manager-7776888b98-lsztf\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.445870 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.445931 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.454076 4822 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.475147 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.481956 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.494567 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.551407 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir\") pod \"eba5255f-d8e3-43d0-8610-b18152fdaa48\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.551490 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access\") pod \"eba5255f-d8e3-43d0-8610-b18152fdaa48\" (UID: \"eba5255f-d8e3-43d0-8610-b18152fdaa48\") " Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.551536 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eba5255f-d8e3-43d0-8610-b18152fdaa48" (UID: "eba5255f-d8e3-43d0-8610-b18152fdaa48"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.551697 4822 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eba5255f-d8e3-43d0-8610-b18152fdaa48-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.554888 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eba5255f-d8e3-43d0-8610-b18152fdaa48" (UID: "eba5255f-d8e3-43d0-8610-b18152fdaa48"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.652459 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.652806 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eba5255f-d8e3-43d0-8610-b18152fdaa48-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.652884 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.697948 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.710104 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:12 crc kubenswrapper[4822]: I0224 09:12:12.751425 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 
24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.239281 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" event={"ID":"7fc8c929-7ba6-470b-b4f1-33e654102c24","Type":"ContainerStarted","Data":"b7642a5404403f72a5a9499b3963dca5cf363e72380c70dc675a054a8db7af73"} Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.240897 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" event={"ID":"7fc8c929-7ba6-470b-b4f1-33e654102c24","Type":"ContainerStarted","Data":"6434e6fc7356a4255b810b0e1d1af375e4f58b716c191d2413a072e93182ddfb"} Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.242765 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.251879 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" event={"ID":"20875278-2ac1-4f4e-bacc-016d046240fb","Type":"ContainerStarted","Data":"ddb33c5e58b6fbec36fbaeafa4ea2e1b0659d8c4aa5b4b549a0d125cee2050df"} Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.252091 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" event={"ID":"20875278-2ac1-4f4e-bacc-016d046240fb","Type":"ContainerStarted","Data":"64cc3b390a3f5ac2b9d92601daf05a76ffef2c541cbb81a3a6d076be20e97030"} Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.252223 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.255127 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.255185 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eba5255f-d8e3-43d0-8610-b18152fdaa48","Type":"ContainerDied","Data":"5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d"} Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.255273 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6b4933c7158645fdf6aeecb085e5fa230f648aa69480f552d02ce5d1703a0d" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.259293 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.275828 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" podStartSLOduration=2.275802201 podStartE2EDuration="2.275802201s" podCreationTimestamp="2026-02-24 09:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:13.275761429 +0000 UTC m=+255.663523977" watchObservedRunningTime="2026-02-24 09:12:13.275802201 +0000 UTC m=+255.663564749" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.308857 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" podStartSLOduration=3.308828843 podStartE2EDuration="3.308828843s" podCreationTimestamp="2026-02-24 09:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:13.308794252 +0000 UTC m=+255.696556800" watchObservedRunningTime="2026-02-24 09:12:13.308828843 
+0000 UTC m=+255.696591391" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.311529 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.328230 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:13 crc kubenswrapper[4822]: I0224 09:12:13.330338 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.231502 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.231674 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.290386 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.312362 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.662765 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.662814 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.713082 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:14 crc kubenswrapper[4822]: E0224 09:12:14.713332 4822 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba5255f-d8e3-43d0-8610-b18152fdaa48" containerName="pruner" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.713346 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba5255f-d8e3-43d0-8610-b18152fdaa48" containerName="pruner" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.713466 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba5255f-d8e3-43d0-8610-b18152fdaa48" containerName="pruner" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.713872 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.716253 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.716526 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.725906 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.761759 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.780588 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.780642 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.780745 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.881293 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.881351 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.881394 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.881475 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock\") pod \"installer-9-crc\" (UID: 
\"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.881785 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:14 crc kubenswrapper[4822]: I0224 09:12:14.902629 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.073278 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.317037 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.332678 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.355503 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.356587 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.580659 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 
09:12:15.629775 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.674479 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.674828 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.676596 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:12:15 crc kubenswrapper[4822]: I0224 09:12:15.676690 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.274865 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e","Type":"ContainerStarted","Data":"6c51dd2f303375c4d761e28770a635e5f8aa091551ecd26ac1dcc27ad6d54e01"} Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.275171 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e","Type":"ContainerStarted","Data":"9d43b3a9ef8e0f440aeb1f9234793d621641f4ac2a5062e6304865c628d6644c"} Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.275259 4822 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-56bwc" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="registry-server" containerID="cri-o://3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c" gracePeriod=2 Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.403655 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xhmth" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server" probeResult="failure" output=< Feb 24 09:12:16 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:12:16 crc kubenswrapper[4822]: > Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.683103 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.710265 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475z7\" (UniqueName: \"kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7\") pod \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.710355 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities\") pod \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.710400 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content\") pod \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\" (UID: \"2a5b1d8c-f894-4c8a-b82b-052aa58260e2\") " Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.711200 4822 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities" (OuterVolumeSpecName: "utilities") pod "2a5b1d8c-f894-4c8a-b82b-052aa58260e2" (UID: "2a5b1d8c-f894-4c8a-b82b-052aa58260e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.719557 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7" (OuterVolumeSpecName: "kube-api-access-475z7") pod "2a5b1d8c-f894-4c8a-b82b-052aa58260e2" (UID: "2a5b1d8c-f894-4c8a-b82b-052aa58260e2"). InnerVolumeSpecName "kube-api-access-475z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.742238 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmzzx" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="registry-server" probeResult="failure" output=< Feb 24 09:12:16 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:12:16 crc kubenswrapper[4822]: > Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.770515 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a5b1d8c-f894-4c8a-b82b-052aa58260e2" (UID: "2a5b1d8c-f894-4c8a-b82b-052aa58260e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.813784 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.813825 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475z7\" (UniqueName: \"kubernetes.io/projected/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-kube-api-access-475z7\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:16 crc kubenswrapper[4822]: I0224 09:12:16.813843 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5b1d8c-f894-4c8a-b82b-052aa58260e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.284835 4822 generic.go:334] "Generic (PLEG): container finished" podID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerID="3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c" exitCode=0 Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.284947 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56bwc" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.284961 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerDied","Data":"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c"} Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.285041 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56bwc" event={"ID":"2a5b1d8c-f894-4c8a-b82b-052aa58260e2","Type":"ContainerDied","Data":"c05a6e9277c2505db61fcb3a3ed7f61ea6fbd534bc87e30ed23295af9b358855"} Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.285072 4822 scope.go:117] "RemoveContainer" containerID="3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.304809 4822 scope.go:117] "RemoveContainer" containerID="47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.304858 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.304836433 podStartE2EDuration="3.304836433s" podCreationTimestamp="2026-02-24 09:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:17.300842082 +0000 UTC m=+259.688604670" watchObservedRunningTime="2026-02-24 09:12:17.304836433 +0000 UTC m=+259.692599021" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.327333 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.333100 4822 scope.go:117] "RemoveContainer" 
containerID="a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.334155 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-56bwc"] Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.340347 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.340668 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z68xq" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="registry-server" containerID="cri-o://a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3" gracePeriod=2 Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.350470 4822 scope.go:117] "RemoveContainer" containerID="3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c" Feb 24 09:12:17 crc kubenswrapper[4822]: E0224 09:12:17.350739 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c\": container with ID starting with 3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c not found: ID does not exist" containerID="3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.350783 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c"} err="failed to get container status \"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c\": rpc error: code = NotFound desc = could not find container \"3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c\": container with ID starting with 
3f0b0c59cc8335f49603e37dcd092f74422df037ac2ba5cd1a7c318d4167763c not found: ID does not exist" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.350804 4822 scope.go:117] "RemoveContainer" containerID="47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e" Feb 24 09:12:17 crc kubenswrapper[4822]: E0224 09:12:17.351155 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e\": container with ID starting with 47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e not found: ID does not exist" containerID="47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.351204 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e"} err="failed to get container status \"47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e\": rpc error: code = NotFound desc = could not find container \"47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e\": container with ID starting with 47e09fba63bd6a559d7d81deef538d408a7da5f090e93e7a2c534719ecac5e3e not found: ID does not exist" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.351235 4822 scope.go:117] "RemoveContainer" containerID="a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b" Feb 24 09:12:17 crc kubenswrapper[4822]: E0224 09:12:17.351753 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b\": container with ID starting with a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b not found: ID does not exist" containerID="a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b" Feb 24 09:12:17 crc 
kubenswrapper[4822]: I0224 09:12:17.351781 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b"} err="failed to get container status \"a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b\": rpc error: code = NotFound desc = could not find container \"a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b\": container with ID starting with a4533d913baec61364af948c6afe4c73be157f097d1d214b4a525a3d1ef2fc9b not found: ID does not exist" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.790636 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.825486 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities\") pod \"46112603-12a1-4bde-8442-c9675eb2c5f0\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.825595 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzzm7\" (UniqueName: \"kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7\") pod \"46112603-12a1-4bde-8442-c9675eb2c5f0\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.825640 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content\") pod \"46112603-12a1-4bde-8442-c9675eb2c5f0\" (UID: \"46112603-12a1-4bde-8442-c9675eb2c5f0\") " Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.827128 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities" (OuterVolumeSpecName: "utilities") pod "46112603-12a1-4bde-8442-c9675eb2c5f0" (UID: "46112603-12a1-4bde-8442-c9675eb2c5f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.829202 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7" (OuterVolumeSpecName: "kube-api-access-xzzm7") pod "46112603-12a1-4bde-8442-c9675eb2c5f0" (UID: "46112603-12a1-4bde-8442-c9675eb2c5f0"). InnerVolumeSpecName "kube-api-access-xzzm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.875643 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46112603-12a1-4bde-8442-c9675eb2c5f0" (UID: "46112603-12a1-4bde-8442-c9675eb2c5f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.926872 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.926907 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzzm7\" (UniqueName: \"kubernetes.io/projected/46112603-12a1-4bde-8442-c9675eb2c5f0-kube-api-access-xzzm7\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.926945 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46112603-12a1-4bde-8442-c9675eb2c5f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.933438 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"] Feb 24 09:12:17 crc kubenswrapper[4822]: I0224 09:12:17.933767 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcvsl" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="registry-server" containerID="cri-o://53027f624371cf0a9591ded2e469fb596208ceab57292522484c3323b029a3af" gracePeriod=2 Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.295592 4822 generic.go:334] "Generic (PLEG): container finished" podID="02875d0b-b186-4481-bf06-923e1d91f53f" containerID="53027f624371cf0a9591ded2e469fb596208ceab57292522484c3323b029a3af" exitCode=0 Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.296189 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerDied","Data":"53027f624371cf0a9591ded2e469fb596208ceab57292522484c3323b029a3af"} Feb 24 09:12:18 
crc kubenswrapper[4822]: I0224 09:12:18.300740 4822 generic.go:334] "Generic (PLEG): container finished" podID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerID="a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3" exitCode=0 Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.300858 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerDied","Data":"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3"} Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.300945 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z68xq" event={"ID":"46112603-12a1-4bde-8442-c9675eb2c5f0","Type":"ContainerDied","Data":"50c7039a94fb9bdc94700fe12b7d4bd7a446910f38720fc4fb8a88bbc4aa9ad2"} Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.300985 4822 scope.go:117] "RemoveContainer" containerID="a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.301129 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z68xq" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.338396 4822 scope.go:117] "RemoveContainer" containerID="9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.359269 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" path="/var/lib/kubelet/pods/2a5b1d8c-f894-4c8a-b82b-052aa58260e2/volumes" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.359766 4822 scope.go:117] "RemoveContainer" containerID="89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.359843 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.359866 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z68xq"] Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.375230 4822 scope.go:117] "RemoveContainer" containerID="a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3" Feb 24 09:12:18 crc kubenswrapper[4822]: E0224 09:12:18.375548 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3\": container with ID starting with a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3 not found: ID does not exist" containerID="a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.375580 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3"} err="failed to get container status 
\"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3\": rpc error: code = NotFound desc = could not find container \"a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3\": container with ID starting with a895d078ba6901f19797a44ca503267bc6b76bda4f45d736dbcddabf898a94a3 not found: ID does not exist" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.375600 4822 scope.go:117] "RemoveContainer" containerID="9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1" Feb 24 09:12:18 crc kubenswrapper[4822]: E0224 09:12:18.375806 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1\": container with ID starting with 9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1 not found: ID does not exist" containerID="9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.375827 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1"} err="failed to get container status \"9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1\": rpc error: code = NotFound desc = could not find container \"9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1\": container with ID starting with 9e8468beeae009df685ad3be77ab1c112a2f05f7417c48b3927b50f73d4f4dd1 not found: ID does not exist" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.375841 4822 scope.go:117] "RemoveContainer" containerID="89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7" Feb 24 09:12:18 crc kubenswrapper[4822]: E0224 09:12:18.376130 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7\": container with ID starting with 89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7 not found: ID does not exist" containerID="89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.376150 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7"} err="failed to get container status \"89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7\": rpc error: code = NotFound desc = could not find container \"89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7\": container with ID starting with 89e67f5d5e9edab40de6e1ef0e52e0e6ef47ed061b0a21899f47145674d4a4f7 not found: ID does not exist" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.426481 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.535190 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content\") pod \"02875d0b-b186-4481-bf06-923e1d91f53f\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.535299 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dm8\" (UniqueName: \"kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8\") pod \"02875d0b-b186-4481-bf06-923e1d91f53f\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.535336 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities\") pod \"02875d0b-b186-4481-bf06-923e1d91f53f\" (UID: \"02875d0b-b186-4481-bf06-923e1d91f53f\") " Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.536248 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities" (OuterVolumeSpecName: "utilities") pod "02875d0b-b186-4481-bf06-923e1d91f53f" (UID: "02875d0b-b186-4481-bf06-923e1d91f53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.540344 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8" (OuterVolumeSpecName: "kube-api-access-w9dm8") pod "02875d0b-b186-4481-bf06-923e1d91f53f" (UID: "02875d0b-b186-4481-bf06-923e1d91f53f"). InnerVolumeSpecName "kube-api-access-w9dm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.558085 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02875d0b-b186-4481-bf06-923e1d91f53f" (UID: "02875d0b-b186-4481-bf06-923e1d91f53f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.637133 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dm8\" (UniqueName: \"kubernetes.io/projected/02875d0b-b186-4481-bf06-923e1d91f53f-kube-api-access-w9dm8\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.637192 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:18 crc kubenswrapper[4822]: I0224 09:12:18.637223 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02875d0b-b186-4481-bf06-923e1d91f53f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.312360 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcvsl" Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.312335 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcvsl" event={"ID":"02875d0b-b186-4481-bf06-923e1d91f53f","Type":"ContainerDied","Data":"2ecc4343792786b583eb63364d49560f10e29842939b7795ea89bd4740100712"} Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.313007 4822 scope.go:117] "RemoveContainer" containerID="53027f624371cf0a9591ded2e469fb596208ceab57292522484c3323b029a3af" Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.336724 4822 scope.go:117] "RemoveContainer" containerID="87c2a4ada017ce6469931ab478c3282afb4e87d9c134c3878fa6ff967bc55abe" Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.354363 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"] Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.359813 4822 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcvsl"] Feb 24 09:12:19 crc kubenswrapper[4822]: I0224 09:12:19.374519 4822 scope.go:117] "RemoveContainer" containerID="80fea790ba8ef5802df83d7f33694fd8391dd56406109024313ba18b3a2dfd8c" Feb 24 09:12:20 crc kubenswrapper[4822]: I0224 09:12:20.347080 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" path="/var/lib/kubelet/pods/02875d0b-b186-4481-bf06-923e1d91f53f/volumes" Feb 24 09:12:20 crc kubenswrapper[4822]: I0224 09:12:20.348582 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" path="/var/lib/kubelet/pods/46112603-12a1-4bde-8442-c9675eb2c5f0/volumes" Feb 24 09:12:25 crc kubenswrapper[4822]: I0224 09:12:25.425633 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:12:25 crc kubenswrapper[4822]: I0224 09:12:25.479120 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:12:25 crc kubenswrapper[4822]: I0224 09:12:25.737580 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:25 crc kubenswrapper[4822]: I0224 09:12:25.811711 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:28 crc kubenswrapper[4822]: I0224 09:12:28.934317 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"] Feb 24 09:12:28 crc kubenswrapper[4822]: I0224 09:12:28.935013 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tmzzx" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="registry-server" 
containerID="cri-o://4c03751fc76eb0bb481b09fcfb35582f3f3639658b34add5eb47204b3de03ce7" gracePeriod=2 Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.051280 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerName="oauth-openshift" containerID="cri-o://2f7d2645e8295c8ada1025aadba9b33ca3b003c0bffc24fc2e256afad93f1fcb" gracePeriod=15 Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.402070 4822 generic.go:334] "Generic (PLEG): container finished" podID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerID="4c03751fc76eb0bb481b09fcfb35582f3f3639658b34add5eb47204b3de03ce7" exitCode=0 Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.402214 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerDied","Data":"4c03751fc76eb0bb481b09fcfb35582f3f3639658b34add5eb47204b3de03ce7"} Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.404339 4822 generic.go:334] "Generic (PLEG): container finished" podID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerID="2f7d2645e8295c8ada1025aadba9b33ca3b003c0bffc24fc2e256afad93f1fcb" exitCode=0 Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.404436 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" event={"ID":"751981f5-4bd9-42fd-888e-2407c6a197ca","Type":"ContainerDied","Data":"2f7d2645e8295c8ada1025aadba9b33ca3b003c0bffc24fc2e256afad93f1fcb"} Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.503333 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.593881 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities\") pod \"bb158e82-c2df-449e-a2ad-a73731f5965b\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.594144 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ldh2\" (UniqueName: \"kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2\") pod \"bb158e82-c2df-449e-a2ad-a73731f5965b\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.594216 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content\") pod \"bb158e82-c2df-449e-a2ad-a73731f5965b\" (UID: \"bb158e82-c2df-449e-a2ad-a73731f5965b\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.595170 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities" (OuterVolumeSpecName: "utilities") pod "bb158e82-c2df-449e-a2ad-a73731f5965b" (UID: "bb158e82-c2df-449e-a2ad-a73731f5965b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.603059 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2" (OuterVolumeSpecName: "kube-api-access-2ldh2") pod "bb158e82-c2df-449e-a2ad-a73731f5965b" (UID: "bb158e82-c2df-449e-a2ad-a73731f5965b"). InnerVolumeSpecName "kube-api-access-2ldh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.651228 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699382 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699456 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699501 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699543 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699580 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699615 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699653 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699708 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699765 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699839 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52tsm\" (UniqueName: 
\"kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699881 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699941 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.699998 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.700065 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template\") pod \"751981f5-4bd9-42fd-888e-2407c6a197ca\" (UID: \"751981f5-4bd9-42fd-888e-2407c6a197ca\") " Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.700426 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ldh2\" (UniqueName: \"kubernetes.io/projected/bb158e82-c2df-449e-a2ad-a73731f5965b-kube-api-access-2ldh2\") on node \"crc\" DevicePath \"\"" Feb 24 
09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.700450 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.701535 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.701596 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.704901 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.706151 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.706163 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.707728 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.708458 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.709627 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm" (OuterVolumeSpecName: "kube-api-access-52tsm") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "kube-api-access-52tsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.709734 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.711251 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.712762 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.715449 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.715835 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.717954 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "751981f5-4bd9-42fd-888e-2407c6a197ca" (UID: "751981f5-4bd9-42fd-888e-2407c6a197ca"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.720352 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48240: no serving certificate available for the kubelet" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.744726 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb158e82-c2df-449e-a2ad-a73731f5965b" (UID: "bb158e82-c2df-449e-a2ad-a73731f5965b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.801430 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.801727 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.801799 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.801855 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.801931 4822 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802000 4822 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802060 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802117 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb158e82-c2df-449e-a2ad-a73731f5965b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802170 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802222 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802274 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52tsm\" (UniqueName: \"kubernetes.io/projected/751981f5-4bd9-42fd-888e-2407c6a197ca-kube-api-access-52tsm\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802337 4822 reconciler_common.go:293] 
"Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/751981f5-4bd9-42fd-888e-2407c6a197ca-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802390 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802448 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:29 crc kubenswrapper[4822]: I0224 09:12:29.802506 4822 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/751981f5-4bd9-42fd-888e-2407c6a197ca-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.412524 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmzzx" event={"ID":"bb158e82-c2df-449e-a2ad-a73731f5965b","Type":"ContainerDied","Data":"fc111fb567bcefab45e255fcda4ba2f6f1676d2a6fee29336a5db690c48fadc9"} Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.412804 4822 scope.go:117] "RemoveContainer" containerID="4c03751fc76eb0bb481b09fcfb35582f3f3639658b34add5eb47204b3de03ce7" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.412964 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmzzx" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.417270 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" event={"ID":"751981f5-4bd9-42fd-888e-2407c6a197ca","Type":"ContainerDied","Data":"ba51c796c73dbc7c6fca7b9fb40c8ec89f9af8338f562b5791b17e39287a027a"} Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.417425 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vct48" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.442187 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"] Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.455044 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tmzzx"] Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.468357 4822 scope.go:117] "RemoveContainer" containerID="6de92bdb09cf07e9cffbd517d60960696184fd0f9159a76aea5b00183891402f" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.469809 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.473337 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vct48"] Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.485326 4822 scope.go:117] "RemoveContainer" containerID="83230ecab1e9147b63c87792b8c2c0caf8f134c81add3ca57a016a0ad229ff65" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.505740 4822 scope.go:117] "RemoveContainer" containerID="2f7d2645e8295c8ada1025aadba9b33ca3b003c0bffc24fc2e256afad93f1fcb" Feb 24 09:12:30 crc kubenswrapper[4822]: E0224 09:12:30.517773 4822 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb158e82_c2df_449e_a2ad_a73731f5965b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751981f5_4bd9_42fd_888e_2407c6a197ca.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb158e82_c2df_449e_a2ad_a73731f5965b.slice/crio-fc111fb567bcefab45e255fcda4ba2f6f1676d2a6fee29336a5db690c48fadc9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751981f5_4bd9_42fd_888e_2407c6a197ca.slice/crio-ba51c796c73dbc7c6fca7b9fb40c8ec89f9af8338f562b5791b17e39287a027a\": RecentStats: unable to find data in memory cache]" Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.992500 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:30 crc kubenswrapper[4822]: I0224 09:12:30.992818 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" podUID="20875278-2ac1-4f4e-bacc-016d046240fb" containerName="controller-manager" containerID="cri-o://ddb33c5e58b6fbec36fbaeafa4ea2e1b0659d8c4aa5b4b549a0d125cee2050df" gracePeriod=30 Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.021971 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.022273 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" podUID="7fc8c929-7ba6-470b-b4f1-33e654102c24" containerName="route-controller-manager" 
containerID="cri-o://b7642a5404403f72a5a9499b3963dca5cf363e72380c70dc675a054a8db7af73" gracePeriod=30 Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.427730 4822 generic.go:334] "Generic (PLEG): container finished" podID="7fc8c929-7ba6-470b-b4f1-33e654102c24" containerID="b7642a5404403f72a5a9499b3963dca5cf363e72380c70dc675a054a8db7af73" exitCode=0 Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.427795 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" event={"ID":"7fc8c929-7ba6-470b-b4f1-33e654102c24","Type":"ContainerDied","Data":"b7642a5404403f72a5a9499b3963dca5cf363e72380c70dc675a054a8db7af73"} Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.435302 4822 generic.go:334] "Generic (PLEG): container finished" podID="20875278-2ac1-4f4e-bacc-016d046240fb" containerID="ddb33c5e58b6fbec36fbaeafa4ea2e1b0659d8c4aa5b4b549a0d125cee2050df" exitCode=0 Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.435349 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" event={"ID":"20875278-2ac1-4f4e-bacc-016d046240fb","Type":"ContainerDied","Data":"ddb33c5e58b6fbec36fbaeafa4ea2e1b0659d8c4aa5b4b549a0d125cee2050df"} Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.569782 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.614120 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.627236 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config\") pod \"7fc8c929-7ba6-470b-b4f1-33e654102c24\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.627365 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert\") pod \"7fc8c929-7ba6-470b-b4f1-33e654102c24\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.627432 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca\") pod \"7fc8c929-7ba6-470b-b4f1-33e654102c24\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.627501 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpbdm\" (UniqueName: \"kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm\") pod \"7fc8c929-7ba6-470b-b4f1-33e654102c24\" (UID: \"7fc8c929-7ba6-470b-b4f1-33e654102c24\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.628529 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fc8c929-7ba6-470b-b4f1-33e654102c24" (UID: "7fc8c929-7ba6-470b-b4f1-33e654102c24"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.628604 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config" (OuterVolumeSpecName: "config") pod "7fc8c929-7ba6-470b-b4f1-33e654102c24" (UID: "7fc8c929-7ba6-470b-b4f1-33e654102c24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.639953 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fc8c929-7ba6-470b-b4f1-33e654102c24" (UID: "7fc8c929-7ba6-470b-b4f1-33e654102c24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.640373 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm" (OuterVolumeSpecName: "kube-api-access-lpbdm") pod "7fc8c929-7ba6-470b-b4f1-33e654102c24" (UID: "7fc8c929-7ba6-470b-b4f1-33e654102c24"). InnerVolumeSpecName "kube-api-access-lpbdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.729996 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles\") pod \"20875278-2ac1-4f4e-bacc-016d046240fb\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730081 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert\") pod \"20875278-2ac1-4f4e-bacc-016d046240fb\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730225 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tbxm\" (UniqueName: \"kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm\") pod \"20875278-2ac1-4f4e-bacc-016d046240fb\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730292 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config\") pod \"20875278-2ac1-4f4e-bacc-016d046240fb\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730335 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca\") pod \"20875278-2ac1-4f4e-bacc-016d046240fb\" (UID: \"20875278-2ac1-4f4e-bacc-016d046240fb\") " Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730745 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730778 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc8c929-7ba6-470b-b4f1-33e654102c24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730799 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fc8c929-7ba6-470b-b4f1-33e654102c24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.730819 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpbdm\" (UniqueName: \"kubernetes.io/projected/7fc8c929-7ba6-470b-b4f1-33e654102c24-kube-api-access-lpbdm\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.731307 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "20875278-2ac1-4f4e-bacc-016d046240fb" (UID: "20875278-2ac1-4f4e-bacc-016d046240fb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.732745 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config" (OuterVolumeSpecName: "config") pod "20875278-2ac1-4f4e-bacc-016d046240fb" (UID: "20875278-2ac1-4f4e-bacc-016d046240fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.733565 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "20875278-2ac1-4f4e-bacc-016d046240fb" (UID: "20875278-2ac1-4f4e-bacc-016d046240fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.734798 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20875278-2ac1-4f4e-bacc-016d046240fb" (UID: "20875278-2ac1-4f4e-bacc-016d046240fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.737279 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm" (OuterVolumeSpecName: "kube-api-access-5tbxm") pod "20875278-2ac1-4f4e-bacc-016d046240fb" (UID: "20875278-2ac1-4f4e-bacc-016d046240fb"). InnerVolumeSpecName "kube-api-access-5tbxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.831765 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.831826 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20875278-2ac1-4f4e-bacc-016d046240fb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.831847 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tbxm\" (UniqueName: \"kubernetes.io/projected/20875278-2ac1-4f4e-bacc-016d046240fb-kube-api-access-5tbxm\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.831867 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:31 crc kubenswrapper[4822]: I0224 09:12:31.831884 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20875278-2ac1-4f4e-bacc-016d046240fb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.176724 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.176973 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.176988 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="extract-utilities" Feb 24 09:12:32 crc 
kubenswrapper[4822]: E0224 09:12:32.176999 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerName="oauth-openshift" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177005 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerName="oauth-openshift" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177014 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177020 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177028 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177034 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177039 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20875278-2ac1-4f4e-bacc-016d046240fb" containerName="controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177046 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="20875278-2ac1-4f4e-bacc-016d046240fb" containerName="controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177053 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177058 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="registry-server" Feb 24 09:12:32 crc 
kubenswrapper[4822]: E0224 09:12:32.177065 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177073 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177083 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177088 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177096 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177101 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177114 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177119 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177128 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177134 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="extract-content" Feb 24 09:12:32 crc 
kubenswrapper[4822]: E0224 09:12:32.177142 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177147 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177155 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177160 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="extract-content" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177167 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177172 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="extract-utilities" Feb 24 09:12:32 crc kubenswrapper[4822]: E0224 09:12:32.177181 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc8c929-7ba6-470b-b4f1-33e654102c24" containerName="route-controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177187 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc8c929-7ba6-470b-b4f1-33e654102c24" containerName="route-controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177270 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" containerName="oauth-openshift" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177283 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="46112603-12a1-4bde-8442-c9675eb2c5f0" containerName="registry-server" Feb 
24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177290 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="02875d0b-b186-4481-bf06-923e1d91f53f" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177297 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5b1d8c-f894-4c8a-b82b-052aa58260e2" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177305 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="20875278-2ac1-4f4e-bacc-016d046240fb" containerName="controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177314 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" containerName="registry-server" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177323 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc8c929-7ba6-470b-b4f1-33e654102c24" containerName="route-controller-manager" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.177734 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.184986 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.186297 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.194043 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.201163 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237037 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4jk\" (UniqueName: \"kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237092 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkjz\" (UniqueName: \"kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237135 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237157 4822 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237229 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237255 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237282 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237304 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config\") pod \"controller-manager-868999fb6-zm7cz\" (UID: 
\"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.237350 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.338669 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.339785 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.339840 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.339883 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.339955 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.340007 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.340104 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.340182 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4jk\" (UniqueName: \"kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 
09:12:32.340283 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkjz\" (UniqueName: \"kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.340750 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.341372 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.341578 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.342663 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " 
pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.344283 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.345585 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.354211 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.356572 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751981f5-4bd9-42fd-888e-2407c6a197ca" path="/var/lib/kubelet/pods/751981f5-4bd9-42fd-888e-2407c6a197ca/volumes" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.358256 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb158e82-c2df-449e-a2ad-a73731f5965b" path="/var/lib/kubelet/pods/bb158e82-c2df-449e-a2ad-a73731f5965b/volumes" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.362127 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4jk\" 
(UniqueName: \"kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk\") pod \"route-controller-manager-f7f6c779c-p8pj5\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.364764 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkjz\" (UniqueName: \"kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz\") pod \"controller-manager-868999fb6-zm7cz\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.443607 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" event={"ID":"7fc8c929-7ba6-470b-b4f1-33e654102c24","Type":"ContainerDied","Data":"6434e6fc7356a4255b810b0e1d1af375e4f58b716c191d2413a072e93182ddfb"} Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.443686 4822 scope.go:117] "RemoveContainer" containerID="b7642a5404403f72a5a9499b3963dca5cf363e72380c70dc675a054a8db7af73" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.443708 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.445358 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" event={"ID":"20875278-2ac1-4f4e-bacc-016d046240fb","Type":"ContainerDied","Data":"64cc3b390a3f5ac2b9d92601daf05a76ffef2c541cbb81a3a6d076be20e97030"} Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.445414 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.464226 4822 scope.go:117] "RemoveContainer" containerID="ddb33c5e58b6fbec36fbaeafa4ea2e1b0659d8c4aa5b4b549a0d125cee2050df" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.467159 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.469782 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f8d6577d6-6bnv5"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.476610 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.479347 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7776888b98-lsztf"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.519592 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.538488 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.739950 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:32 crc kubenswrapper[4822]: I0224 09:12:32.790793 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:32 crc kubenswrapper[4822]: W0224 09:12:32.808846 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f7081d5_2ccc_444e_9260_02627744ec2a.slice/crio-88f41177a4f2ce86fc10fddbc158063d0351deb4bb4f9e02d3eb3c0899762b88 WatchSource:0}: Error finding container 88f41177a4f2ce86fc10fddbc158063d0351deb4bb4f9e02d3eb3c0899762b88: Status 404 returned error can't find the container with id 88f41177a4f2ce86fc10fddbc158063d0351deb4bb4f9e02d3eb3c0899762b88 Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.452113 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" event={"ID":"1f7081d5-2ccc-444e-9260-02627744ec2a","Type":"ContainerStarted","Data":"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c"} Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.452161 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" event={"ID":"1f7081d5-2ccc-444e-9260-02627744ec2a","Type":"ContainerStarted","Data":"88f41177a4f2ce86fc10fddbc158063d0351deb4bb4f9e02d3eb3c0899762b88"} Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.452421 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.455512 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" event={"ID":"de72cec0-fad5-48b0-a0f2-a5abefd9ba79","Type":"ContainerStarted","Data":"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436"} Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.455544 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" event={"ID":"de72cec0-fad5-48b0-a0f2-a5abefd9ba79","Type":"ContainerStarted","Data":"0a5ebd18b7f2ce9e183771c01624770b8a43580ed8013b57a255931371378e06"} Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.455702 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.457905 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.460711 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.477958 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" podStartSLOduration=2.477938756 podStartE2EDuration="2.477938756s" podCreationTimestamp="2026-02-24 09:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:33.471860655 +0000 UTC m=+275.859623203" watchObservedRunningTime="2026-02-24 09:12:33.477938756 +0000 UTC m=+275.865701314" Feb 24 09:12:33 crc kubenswrapper[4822]: I0224 09:12:33.504156 4822 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" podStartSLOduration=2.504139172 podStartE2EDuration="2.504139172s" podCreationTimestamp="2026-02-24 09:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:33.504116001 +0000 UTC m=+275.891878539" watchObservedRunningTime="2026-02-24 09:12:33.504139172 +0000 UTC m=+275.891901720" Feb 24 09:12:34 crc kubenswrapper[4822]: I0224 09:12:34.347815 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20875278-2ac1-4f4e-bacc-016d046240fb" path="/var/lib/kubelet/pods/20875278-2ac1-4f4e-bacc-016d046240fb/volumes" Feb 24 09:12:34 crc kubenswrapper[4822]: I0224 09:12:34.349442 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc8c929-7ba6-470b-b4f1-33e654102c24" path="/var/lib/kubelet/pods/7fc8c929-7ba6-470b-b4f1-33e654102c24/volumes" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.174641 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-ln8mm"] Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.176251 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183257 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183293 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183352 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183477 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183483 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.183839 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.184295 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.184435 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.184993 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.185030 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:12:36 crc 
kubenswrapper[4822]: I0224 09:12:36.187905 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.198305 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.201353 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.204770 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.213607 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.246870 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-ln8mm"] Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287542 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287584 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: 
\"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287607 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287640 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbrl9\" (UniqueName: \"kubernetes.io/projected/d15d61d1-9e53-4007-9141-600a296d268f-kube-api-access-fbrl9\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287690 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287717 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 
crc kubenswrapper[4822]: I0224 09:12:36.287762 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-audit-policies\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287780 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287797 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287842 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287860 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d15d61d1-9e53-4007-9141-600a296d268f-audit-dir\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287897 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287933 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.287971 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389629 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-audit-policies\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " 
pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389677 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389700 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389729 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389747 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d15d61d1-9e53-4007-9141-600a296d268f-audit-dir\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389764 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389784 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389805 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389830 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389851 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " 
pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389871 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389929 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbrl9\" (UniqueName: \"kubernetes.io/projected/d15d61d1-9e53-4007-9141-600a296d268f-kube-api-access-fbrl9\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389946 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.389971 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.390434 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-audit-policies\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.390497 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d15d61d1-9e53-4007-9141-600a296d268f-audit-dir\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.390872 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.391595 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.391867 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc 
kubenswrapper[4822]: I0224 09:12:36.395237 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.395548 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.395741 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.397076 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.397586 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.397715 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.400944 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.405197 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d15d61d1-9e53-4007-9141-600a296d268f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.410727 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbrl9\" (UniqueName: \"kubernetes.io/projected/d15d61d1-9e53-4007-9141-600a296d268f-kube-api-access-fbrl9\") pod \"oauth-openshift-6b5f774455-ln8mm\" (UID: \"d15d61d1-9e53-4007-9141-600a296d268f\") " pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 
crc kubenswrapper[4822]: I0224 09:12:36.501584 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:36 crc kubenswrapper[4822]: I0224 09:12:36.957878 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-ln8mm"] Feb 24 09:12:37 crc kubenswrapper[4822]: I0224 09:12:37.490906 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" event={"ID":"d15d61d1-9e53-4007-9141-600a296d268f","Type":"ContainerStarted","Data":"dbb1815e6fc364ae60889211f73ec94858a3843ed38a7be4ba1f23cb0de2600f"} Feb 24 09:12:37 crc kubenswrapper[4822]: I0224 09:12:37.492109 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:37 crc kubenswrapper[4822]: I0224 09:12:37.492213 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" event={"ID":"d15d61d1-9e53-4007-9141-600a296d268f","Type":"ContainerStarted","Data":"2f6732dff654be89e76240430fc626844b21860fc3100438d9f6a998dbb9d256"} Feb 24 09:12:37 crc kubenswrapper[4822]: I0224 09:12:37.528286 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" podStartSLOduration=33.528251184 podStartE2EDuration="33.528251184s" podCreationTimestamp="2026-02-24 09:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:37.522292537 +0000 UTC m=+279.910055085" watchObservedRunningTime="2026-02-24 09:12:37.528251184 +0000 UTC m=+279.916013772" Feb 24 09:12:37 crc kubenswrapper[4822]: I0224 09:12:37.999694 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-6b5f774455-ln8mm" Feb 24 09:12:45 crc kubenswrapper[4822]: I0224 09:12:45.677061 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:12:45 crc kubenswrapper[4822]: I0224 09:12:45.677879 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:12:45 crc kubenswrapper[4822]: I0224 09:12:45.677976 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:12:45 crc kubenswrapper[4822]: I0224 09:12:45.678747 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:12:45 crc kubenswrapper[4822]: I0224 09:12:45.678894 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0" gracePeriod=600 Feb 24 09:12:46 crc kubenswrapper[4822]: I0224 09:12:46.558667 4822 generic.go:334] "Generic (PLEG): container finished" 
podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0" exitCode=0 Feb 24 09:12:46 crc kubenswrapper[4822]: I0224 09:12:46.558687 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0"} Feb 24 09:12:46 crc kubenswrapper[4822]: I0224 09:12:46.559216 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55"} Feb 24 09:12:50 crc kubenswrapper[4822]: I0224 09:12:50.967728 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:50 crc kubenswrapper[4822]: I0224 09:12:50.968343 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" podUID="1f7081d5-2ccc-444e-9260-02627744ec2a" containerName="controller-manager" containerID="cri-o://16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c" gracePeriod=30 Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.069056 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.069698 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" podUID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" containerName="route-controller-manager" containerID="cri-o://531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436" gracePeriod=30 
Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.493969 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.558643 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.596020 4822 generic.go:334] "Generic (PLEG): container finished" podID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" containerID="531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436" exitCode=0 Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.596076 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.596115 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" event={"ID":"de72cec0-fad5-48b0-a0f2-a5abefd9ba79","Type":"ContainerDied","Data":"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436"} Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.596179 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5" event={"ID":"de72cec0-fad5-48b0-a0f2-a5abefd9ba79","Type":"ContainerDied","Data":"0a5ebd18b7f2ce9e183771c01624770b8a43580ed8013b57a255931371378e06"} Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.596207 4822 scope.go:117] "RemoveContainer" containerID="531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.597791 4822 generic.go:334] "Generic (PLEG): container finished" podID="1f7081d5-2ccc-444e-9260-02627744ec2a" 
containerID="16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c" exitCode=0 Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.597872 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" event={"ID":"1f7081d5-2ccc-444e-9260-02627744ec2a","Type":"ContainerDied","Data":"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c"} Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.597897 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.597964 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868999fb6-zm7cz" event={"ID":"1f7081d5-2ccc-444e-9260-02627744ec2a","Type":"ContainerDied","Data":"88f41177a4f2ce86fc10fddbc158063d0351deb4bb4f9e02d3eb3c0899762b88"} Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.615091 4822 scope.go:117] "RemoveContainer" containerID="531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436" Feb 24 09:12:51 crc kubenswrapper[4822]: E0224 09:12:51.615565 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436\": container with ID starting with 531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436 not found: ID does not exist" containerID="531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.615634 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436"} err="failed to get container status \"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436\": rpc error: code = NotFound desc = could 
not find container \"531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436\": container with ID starting with 531cdae21dd18e922fb64c467e6516848043eafabc85eabbbca990e43674b436 not found: ID does not exist" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.615671 4822 scope.go:117] "RemoveContainer" containerID="16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.625730 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgkjz\" (UniqueName: \"kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz\") pod \"1f7081d5-2ccc-444e-9260-02627744ec2a\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.625896 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles\") pod \"1f7081d5-2ccc-444e-9260-02627744ec2a\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.625970 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca\") pod \"1f7081d5-2ccc-444e-9260-02627744ec2a\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.626029 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config\") pod \"1f7081d5-2ccc-444e-9260-02627744ec2a\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.626083 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert\") pod \"1f7081d5-2ccc-444e-9260-02627744ec2a\" (UID: \"1f7081d5-2ccc-444e-9260-02627744ec2a\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.626936 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f7081d5-2ccc-444e-9260-02627744ec2a" (UID: "1f7081d5-2ccc-444e-9260-02627744ec2a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.627208 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config" (OuterVolumeSpecName: "config") pod "1f7081d5-2ccc-444e-9260-02627744ec2a" (UID: "1f7081d5-2ccc-444e-9260-02627744ec2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.628077 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1f7081d5-2ccc-444e-9260-02627744ec2a" (UID: "1f7081d5-2ccc-444e-9260-02627744ec2a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.631233 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f7081d5-2ccc-444e-9260-02627744ec2a" (UID: "1f7081d5-2ccc-444e-9260-02627744ec2a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.631461 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz" (OuterVolumeSpecName: "kube-api-access-hgkjz") pod "1f7081d5-2ccc-444e-9260-02627744ec2a" (UID: "1f7081d5-2ccc-444e-9260-02627744ec2a"). InnerVolumeSpecName "kube-api-access-hgkjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.631569 4822 scope.go:117] "RemoveContainer" containerID="16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c" Feb 24 09:12:51 crc kubenswrapper[4822]: E0224 09:12:51.632057 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c\": container with ID starting with 16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c not found: ID does not exist" containerID="16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.632101 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c"} err="failed to get container status \"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c\": rpc error: code = NotFound desc = could not find container \"16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c\": container with ID starting with 16c558b0e40470166e19fa96632eba1faebf5589dc5e7ce7ae8f130b1c94c48c not found: ID does not exist" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.727334 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config\") pod 
\"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.727488 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4jk\" (UniqueName: \"kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk\") pod \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.727556 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert\") pod \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.727599 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca\") pod \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\" (UID: \"de72cec0-fad5-48b0-a0f2-a5abefd9ba79\") " Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728040 4822 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728091 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728115 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7081d5-2ccc-444e-9260-02627744ec2a-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728138 4822 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7081d5-2ccc-444e-9260-02627744ec2a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728164 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgkjz\" (UniqueName: \"kubernetes.io/projected/1f7081d5-2ccc-444e-9260-02627744ec2a-kube-api-access-hgkjz\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728293 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config" (OuterVolumeSpecName: "config") pod "de72cec0-fad5-48b0-a0f2-a5abefd9ba79" (UID: "de72cec0-fad5-48b0-a0f2-a5abefd9ba79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.728615 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca" (OuterVolumeSpecName: "client-ca") pod "de72cec0-fad5-48b0-a0f2-a5abefd9ba79" (UID: "de72cec0-fad5-48b0-a0f2-a5abefd9ba79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.731796 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de72cec0-fad5-48b0-a0f2-a5abefd9ba79" (UID: "de72cec0-fad5-48b0-a0f2-a5abefd9ba79"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.733491 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk" (OuterVolumeSpecName: "kube-api-access-bs4jk") pod "de72cec0-fad5-48b0-a0f2-a5abefd9ba79" (UID: "de72cec0-fad5-48b0-a0f2-a5abefd9ba79"). InnerVolumeSpecName "kube-api-access-bs4jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.829169 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4jk\" (UniqueName: \"kubernetes.io/projected/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-kube-api-access-bs4jk\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.829220 4822 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.829241 4822 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.829259 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de72cec0-fad5-48b0-a0f2-a5abefd9ba79-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.943827 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.949559 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-868999fb6-zm7cz"] Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 
09:12:51.960781 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:51 crc kubenswrapper[4822]: I0224 09:12:51.967019 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f6c779c-p8pj5"] Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.191240 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f584df4bb-47x28"] Feb 24 09:12:52 crc kubenswrapper[4822]: E0224 09:12:52.191685 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7081d5-2ccc-444e-9260-02627744ec2a" containerName="controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.191717 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7081d5-2ccc-444e-9260-02627744ec2a" containerName="controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: E0224 09:12:52.191743 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" containerName="route-controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.191760 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" containerName="route-controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.192009 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7081d5-2ccc-444e-9260-02627744ec2a" containerName="controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.192045 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" containerName="route-controller-manager" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.192645 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.200294 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.200410 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.200600 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.200972 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.201474 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.201701 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.202828 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9"] Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.203942 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.214152 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.214678 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.214772 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.214690 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.216684 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f584df4bb-47x28"] Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.218092 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.218332 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.221661 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9"] Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.226578 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336399 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-config\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336459 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjv4\" (UniqueName: \"kubernetes.io/projected/77162d2b-4877-44ed-a856-124557c3bbfd-kube-api-access-thjv4\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336502 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77162d2b-4877-44ed-a856-124557c3bbfd-serving-cert\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336605 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223ddde5-3687-48fb-8439-ed3dffb4020c-serving-cert\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336639 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nkj\" (UniqueName: \"kubernetes.io/projected/223ddde5-3687-48fb-8439-ed3dffb4020c-kube-api-access-g5nkj\") pod \"controller-manager-5f584df4bb-47x28\" (UID: 
\"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336660 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-client-ca\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336689 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-proxy-ca-bundles\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336726 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-client-ca\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.336769 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-config\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.356022 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1f7081d5-2ccc-444e-9260-02627744ec2a" path="/var/lib/kubelet/pods/1f7081d5-2ccc-444e-9260-02627744ec2a/volumes" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.357182 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de72cec0-fad5-48b0-a0f2-a5abefd9ba79" path="/var/lib/kubelet/pods/de72cec0-fad5-48b0-a0f2-a5abefd9ba79/volumes" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438444 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77162d2b-4877-44ed-a856-124557c3bbfd-serving-cert\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438503 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223ddde5-3687-48fb-8439-ed3dffb4020c-serving-cert\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438536 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-client-ca\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438564 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nkj\" (UniqueName: \"kubernetes.io/projected/223ddde5-3687-48fb-8439-ed3dffb4020c-kube-api-access-g5nkj\") pod \"controller-manager-5f584df4bb-47x28\" (UID: 
\"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438601 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-proxy-ca-bundles\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438634 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-client-ca\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438668 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-config\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438690 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-config\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.438711 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thjv4\" (UniqueName: 
\"kubernetes.io/projected/77162d2b-4877-44ed-a856-124557c3bbfd-kube-api-access-thjv4\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.441355 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-client-ca\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.441734 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-config\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.443088 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-config\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.443290 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77162d2b-4877-44ed-a856-124557c3bbfd-client-ca\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.444408 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77162d2b-4877-44ed-a856-124557c3bbfd-serving-cert\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.444442 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/223ddde5-3687-48fb-8439-ed3dffb4020c-proxy-ca-bundles\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.446216 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223ddde5-3687-48fb-8439-ed3dffb4020c-serving-cert\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.461870 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thjv4\" (UniqueName: \"kubernetes.io/projected/77162d2b-4877-44ed-a856-124557c3bbfd-kube-api-access-thjv4\") pod \"route-controller-manager-5d8b985885-4qql9\" (UID: \"77162d2b-4877-44ed-a856-124557c3bbfd\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.471510 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5nkj\" (UniqueName: \"kubernetes.io/projected/223ddde5-3687-48fb-8439-ed3dffb4020c-kube-api-access-g5nkj\") pod \"controller-manager-5f584df4bb-47x28\" (UID: \"223ddde5-3687-48fb-8439-ed3dffb4020c\") " 
pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.528737 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:52 crc kubenswrapper[4822]: I0224 09:12:52.545516 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.000999 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f584df4bb-47x28"] Feb 24 09:12:53 crc kubenswrapper[4822]: W0224 09:12:53.007176 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod223ddde5_3687_48fb_8439_ed3dffb4020c.slice/crio-1113c09fd05671edffb8bd7a48cac53e88669f5da38d50b7ab58c7bf3ff1d1aa WatchSource:0}: Error finding container 1113c09fd05671edffb8bd7a48cac53e88669f5da38d50b7ab58c7bf3ff1d1aa: Status 404 returned error can't find the container with id 1113c09fd05671edffb8bd7a48cac53e88669f5da38d50b7ab58c7bf3ff1d1aa Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.054098 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9"] Feb 24 09:12:53 crc kubenswrapper[4822]: W0224 09:12:53.061436 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77162d2b_4877_44ed_a856_124557c3bbfd.slice/crio-2e411888bbb0b3248b870d304fb053f16bd712fcef7f1642255f428d8fcc716f WatchSource:0}: Error finding container 2e411888bbb0b3248b870d304fb053f16bd712fcef7f1642255f428d8fcc716f: Status 404 returned error can't find the container with id 2e411888bbb0b3248b870d304fb053f16bd712fcef7f1642255f428d8fcc716f Feb 24 09:12:53 crc 
kubenswrapper[4822]: I0224 09:12:53.614170 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" event={"ID":"223ddde5-3687-48fb-8439-ed3dffb4020c","Type":"ContainerStarted","Data":"1652b4b1f88a73c214d1df3a611032a1d74c28d4e489239acae1ef5f26813f2f"} Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.615002 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" event={"ID":"223ddde5-3687-48fb-8439-ed3dffb4020c","Type":"ContainerStarted","Data":"1113c09fd05671edffb8bd7a48cac53e88669f5da38d50b7ab58c7bf3ff1d1aa"} Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.615528 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.615882 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" event={"ID":"77162d2b-4877-44ed-a856-124557c3bbfd","Type":"ContainerStarted","Data":"514e66f32a7fbdaccb12e233aed6b7963cb5d2e9f91a13f1d900e684d61339ef"} Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.615949 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" event={"ID":"77162d2b-4877-44ed-a856-124557c3bbfd","Type":"ContainerStarted","Data":"2e411888bbb0b3248b870d304fb053f16bd712fcef7f1642255f428d8fcc716f"} Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.616249 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.619491 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" Feb 
24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.628677 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f584df4bb-47x28" podStartSLOduration=3.628668585 podStartE2EDuration="3.628668585s" podCreationTimestamp="2026-02-24 09:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:53.628255143 +0000 UTC m=+296.016017701" watchObservedRunningTime="2026-02-24 09:12:53.628668585 +0000 UTC m=+296.016431133" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.645307 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podStartSLOduration=2.6452877519999998 podStartE2EDuration="2.645287752s" podCreationTimestamp="2026-02-24 09:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:53.644190221 +0000 UTC m=+296.031952769" watchObservedRunningTime="2026-02-24 09:12:53.645287752 +0000 UTC m=+296.033050300" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.721319 4822 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.721611 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99" gracePeriod=15 Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.721683 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0" gracePeriod=15 Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.721715 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290" gracePeriod=15 Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.721877 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3" gracePeriod=15 Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.722450 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf" gracePeriod=15 Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723248 4822 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723472 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723486 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723496 4822 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723504 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723515 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723523 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723537 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723545 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723555 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723564 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723580 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723588 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723597 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723607 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723619 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723627 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723641 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723649 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723759 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723769 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723779 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723789 4822 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723798 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723808 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723819 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723834 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723844 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:12:53 crc kubenswrapper[4822]: E0224 09:12:53.723984 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.723995 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.725314 4822 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.725977 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.731478 4822 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855624 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855692 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855716 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855737 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855751 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855779 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855799 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.855817 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957130 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957179 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957204 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957220 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957261 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957287 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957305 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957337 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957393 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957425 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957446 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc 
kubenswrapper[4822]: I0224 09:12:53.957465 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957485 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957504 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957524 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:53 crc kubenswrapper[4822]: I0224 09:12:53.957546 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.618288 4822 patch_prober.go:28] interesting 
pod/route-controller-manager-5d8b985885-4qql9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.618781 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podUID="77162d2b-4877-44ed-a856-124557c3bbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:12:54 crc kubenswrapper[4822]: E0224 09:12:54.619778 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event=< Feb 24 09:12:54 crc kubenswrapper[4822]: &Event{ObjectMeta:{route-controller-manager-5d8b985885-4qql9.189723d4a601a580 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-5d8b985885-4qql9,UID:77162d2b-4877-44ed-a856-124557c3bbfd,APIVersion:v1,ResourceVersion:29981,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.67:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:12:54 crc kubenswrapper[4822]: body: Feb 24 09:12:54 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC 
m=+297.006506780,LastTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC m=+297.006506780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:12:54 crc kubenswrapper[4822]: > Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.629304 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.631606 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.633169 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf" exitCode=0 Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.633242 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290" exitCode=0 Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.633263 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0" exitCode=0 Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.633282 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3" exitCode=2 Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.633312 4822 scope.go:117] "RemoveContainer" containerID="e3ca3ebff369aebeb980fcfc236006d384e9be088b9f244da9debb446cc66344" Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 
09:12:54.637472 4822 generic.go:334] "Generic (PLEG): container finished" podID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" containerID="6c51dd2f303375c4d761e28770a635e5f8aa091551ecd26ac1dcc27ad6d54e01" exitCode=0 Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.637599 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e","Type":"ContainerDied","Data":"6c51dd2f303375c4d761e28770a635e5f8aa091551ecd26ac1dcc27ad6d54e01"} Feb 24 09:12:54 crc kubenswrapper[4822]: I0224 09:12:54.639551 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:55 crc kubenswrapper[4822]: I0224 09:12:55.638522 4822 patch_prober.go:28] interesting pod/route-controller-manager-5d8b985885-4qql9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:12:55 crc kubenswrapper[4822]: I0224 09:12:55.638603 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podUID="77162d2b-4877-44ed-a856-124557c3bbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:12:55 crc kubenswrapper[4822]: I0224 09:12:55.650819 4822 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.104485 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.105669 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.111399 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.112552 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.113053 4822 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.113312 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.290406 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.290498 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291032 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" (UID: "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.290903 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock\") pod \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291392 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access\") pod \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291541 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir\") pod \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\" (UID: \"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291671 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" (UID: "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291838 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.292038 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.291983 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.292141 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.292697 4822 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.292833 4822 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.292981 4822 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.293118 4822 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.293234 4822 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.299458 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" (UID: "7fe659d3-9cdb-43c9-8ce6-b5be02903c5e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.348876 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.394517 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe659d3-9cdb-43c9-8ce6-b5be02903c5e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.497694 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event=< Feb 24 09:12:56 crc kubenswrapper[4822]: &Event{ObjectMeta:{route-controller-manager-5d8b985885-4qql9.189723d4a601a580 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-5d8b985885-4qql9,UID:77162d2b-4877-44ed-a856-124557c3bbfd,APIVersion:v1,ResourceVersion:29981,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.67:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:12:56 crc kubenswrapper[4822]: body: Feb 24 09:12:56 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC m=+297.006506780,LastTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC m=+297.006506780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:12:56 crc 
kubenswrapper[4822]: > Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.666183 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.669063 4822 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99" exitCode=0 Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.669179 4822 scope.go:117] "RemoveContainer" containerID="708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.669308 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.670429 4822 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.671263 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.672149 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fe659d3-9cdb-43c9-8ce6-b5be02903c5e","Type":"ContainerDied","Data":"9d43b3a9ef8e0f440aeb1f9234793d621641f4ac2a5062e6304865c628d6644c"} Feb 24 09:12:56 crc 
kubenswrapper[4822]: I0224 09:12:56.672261 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d43b3a9ef8e0f440aeb1f9234793d621641f4ac2a5062e6304865c628d6644c" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.672326 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.676850 4822 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.677252 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.684380 4822 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.684697 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.697692 4822 
scope.go:117] "RemoveContainer" containerID="61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.723523 4822 scope.go:117] "RemoveContainer" containerID="0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.748239 4822 scope.go:117] "RemoveContainer" containerID="2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.772829 4822 scope.go:117] "RemoveContainer" containerID="a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.803740 4822 scope.go:117] "RemoveContainer" containerID="09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.846847 4822 scope.go:117] "RemoveContainer" containerID="708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.847577 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\": container with ID starting with 708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf not found: ID does not exist" containerID="708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.847618 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf"} err="failed to get container status \"708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\": rpc error: code = NotFound desc = could not find container \"708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf\": container with ID starting with 
708b7e3ba090c334bf7618665cd0f95d512059d204892851f4005e741b29aedf not found: ID does not exist" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.847648 4822 scope.go:117] "RemoveContainer" containerID="61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.849226 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\": container with ID starting with 61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290 not found: ID does not exist" containerID="61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.849293 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290"} err="failed to get container status \"61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\": rpc error: code = NotFound desc = could not find container \"61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290\": container with ID starting with 61d9eacd534cfdfeb0c220d5b4544a53edbed6fd2fc383a4b6bd4ae94b2b0290 not found: ID does not exist" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.849333 4822 scope.go:117] "RemoveContainer" containerID="0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.849893 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\": container with ID starting with 0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0 not found: ID does not exist" containerID="0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0" Feb 24 09:12:56 crc 
kubenswrapper[4822]: I0224 09:12:56.849970 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0"} err="failed to get container status \"0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\": rpc error: code = NotFound desc = could not find container \"0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0\": container with ID starting with 0bee1aca2add15e7fe314ecd9767f9559711eb31f6f3a4468c45734863607ae0 not found: ID does not exist" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.849997 4822 scope.go:117] "RemoveContainer" containerID="2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.850320 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\": container with ID starting with 2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3 not found: ID does not exist" containerID="2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.850363 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3"} err="failed to get container status \"2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\": rpc error: code = NotFound desc = could not find container \"2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3\": container with ID starting with 2423defc386f4ba254a362261b73e77f5e21ea2961b4f5002a1513a8e9d507d3 not found: ID does not exist" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.850389 4822 scope.go:117] "RemoveContainer" containerID="a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99" Feb 24 
09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.850725 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\": container with ID starting with a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99 not found: ID does not exist" containerID="a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.850752 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99"} err="failed to get container status \"a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\": rpc error: code = NotFound desc = could not find container \"a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99\": container with ID starting with a372c2d22ab5dddbbf72d43098047c6f53dd8fa38f0afae0da52351b83fdec99 not found: ID does not exist" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.850770 4822 scope.go:117] "RemoveContainer" containerID="09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c" Feb 24 09:12:56 crc kubenswrapper[4822]: E0224 09:12:56.851228 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\": container with ID starting with 09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c not found: ID does not exist" containerID="09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c" Feb 24 09:12:56 crc kubenswrapper[4822]: I0224 09:12:56.851266 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c"} err="failed to get container status 
\"09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\": rpc error: code = NotFound desc = could not find container \"09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c\": container with ID starting with 09c5a88ede90266579fce160de91e8c6a75ae72ca716343cbbdc516a236b674c not found: ID does not exist" Feb 24 09:12:58 crc kubenswrapper[4822]: I0224 09:12:58.341721 4822 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:58 crc kubenswrapper[4822]: I0224 09:12:58.342156 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:12:58 crc kubenswrapper[4822]: E0224 09:12:58.763719 4822 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:58 crc kubenswrapper[4822]: I0224 09:12:58.764318 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:59 crc kubenswrapper[4822]: I0224 09:12:59.699472 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c"} Feb 24 09:12:59 crc kubenswrapper[4822]: I0224 09:12:59.700243 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5f99dba521dadbfb522794c59922e00f012daff045920a1eeae39ff71e44337e"} Feb 24 09:12:59 crc kubenswrapper[4822]: E0224 09:12:59.701021 4822 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:12:59 crc kubenswrapper[4822]: I0224 09:12:59.701120 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.806158 4822 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.807259 4822 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.807513 4822 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.807733 4822 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.808107 4822 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:02 crc kubenswrapper[4822]: I0224 09:13:02.808359 4822 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 09:13:02 crc kubenswrapper[4822]: E0224 09:13:02.808905 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 24 09:13:03 crc kubenswrapper[4822]: E0224 09:13:03.010416 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 24 09:13:03 crc kubenswrapper[4822]: E0224 09:13:03.394390 4822 
desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" volumeName="registry-storage" Feb 24 09:13:03 crc kubenswrapper[4822]: E0224 09:13:03.411520 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 24 09:13:03 crc kubenswrapper[4822]: I0224 09:13:03.547788 4822 patch_prober.go:28] interesting pod/route-controller-manager-5d8b985885-4qql9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:13:03 crc kubenswrapper[4822]: I0224 09:13:03.547869 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podUID="77162d2b-4877-44ed-a856-124557c3bbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:13:04 crc kubenswrapper[4822]: E0224 09:13:04.212560 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: 
connection refused" interval="1.6s" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.336585 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.337438 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.362659 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.362709 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:05 crc kubenswrapper[4822]: E0224 09:13:05.363347 4822 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.364181 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:05 crc kubenswrapper[4822]: W0224 09:13:05.411480 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-32c025726bf4781cef096344b3c73647be859b5cb5562f2fcc46e6ca9913fe5f WatchSource:0}: Error finding container 32c025726bf4781cef096344b3c73647be859b5cb5562f2fcc46e6ca9913fe5f: Status 404 returned error can't find the container with id 32c025726bf4781cef096344b3c73647be859b5cb5562f2fcc46e6ca9913fe5f Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.745013 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ec02c926d36634ba21bb66a91a3a4a8aa6ee4358cb3139382cb14b15dacd786"} Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.745484 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32c025726bf4781cef096344b3c73647be859b5cb5562f2fcc46e6ca9913fe5f"} Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.746214 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.746252 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:05 crc kubenswrapper[4822]: I0224 09:13:05.746480 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.164:6443: connect: connection refused" Feb 24 09:13:05 crc kubenswrapper[4822]: E0224 09:13:05.746765 4822 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:05 crc kubenswrapper[4822]: E0224 09:13:05.813504 4822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 24 09:13:06 crc kubenswrapper[4822]: E0224 09:13:06.499481 4822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event=< Feb 24 09:13:06 crc kubenswrapper[4822]: &Event{ObjectMeta:{route-controller-manager-5d8b985885-4qql9.189723d4a601a580 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-5d8b985885-4qql9,UID:77162d2b-4877-44ed-a856-124557c3bbfd,APIVersion:v1,ResourceVersion:29981,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.67:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:13:06 crc kubenswrapper[4822]: body: Feb 24 09:13:06 crc kubenswrapper[4822]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC m=+297.006506780,LastTimestamp:2026-02-24 09:12:54.618744192 +0000 UTC 
m=+297.006506780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:13:06 crc kubenswrapper[4822]: > Feb 24 09:13:06 crc kubenswrapper[4822]: I0224 09:13:06.754449 4822 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8ec02c926d36634ba21bb66a91a3a4a8aa6ee4358cb3139382cb14b15dacd786" exitCode=0 Feb 24 09:13:06 crc kubenswrapper[4822]: I0224 09:13:06.754521 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8ec02c926d36634ba21bb66a91a3a4a8aa6ee4358cb3139382cb14b15dacd786"} Feb 24 09:13:06 crc kubenswrapper[4822]: I0224 09:13:06.755063 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:06 crc kubenswrapper[4822]: I0224 09:13:06.755097 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:06 crc kubenswrapper[4822]: I0224 09:13:06.756113 4822 status_manager.go:851] "Failed to get status for pod" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 24 09:13:06 crc kubenswrapper[4822]: E0224 09:13:06.756378 4822 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.773477 4822 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.774340 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.774417 4822 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2" exitCode=1 Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.774521 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2"} Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.775206 4822 scope.go:117] "RemoveContainer" containerID="2fd8556c3d55934657eb33e924c817164ec4c4ee3daf720283bd5d5a5eb849d2" Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.778350 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f7917f9690bd8d51fe8c7b8259cb11113d64efaafc9d76ae88284e143feb379f"} Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.778390 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d343c7c0120621b701df7e72a7a92af75b95b9d89f0e428d558d653fa6a993be"} Feb 24 09:13:07 crc kubenswrapper[4822]: I0224 09:13:07.778456 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c32fa77ffb260ecfdfaeab07cba63e92f179fbe585d68e2953edf7555f72e3a2"} Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.786790 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.787966 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.788126 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b8213f0280ad6b66f36acb724a9c389911c49a242e406e0c77ce6aba556ee26"} Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.790946 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"50ea8d5fe07eb92a18d5e8d0c687225feb54397c3df68dad048e164cbb5f1401"} Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.790985 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c5c4cd9293fca1a5770e739ba69f3fdf3e62c2825ce7859eb2e6bd17e19a0d5"} Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.791242 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.791367 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:08 crc kubenswrapper[4822]: I0224 09:13:08.791472 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:10 crc kubenswrapper[4822]: I0224 09:13:10.364848 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:10 crc kubenswrapper[4822]: I0224 09:13:10.364908 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:10 crc kubenswrapper[4822]: I0224 09:13:10.375356 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:12 crc kubenswrapper[4822]: I0224 09:13:12.806847 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:13 crc kubenswrapper[4822]: I0224 09:13:13.263525 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:13 crc kubenswrapper[4822]: I0224 09:13:13.270527 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:13 crc kubenswrapper[4822]: I0224 09:13:13.547769 4822 patch_prober.go:28] interesting pod/route-controller-manager-5d8b985885-4qql9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:13:13 crc kubenswrapper[4822]: I0224 09:13:13.548328 4822 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podUID="77162d2b-4877-44ed-a856-124557c3bbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:13:13 crc kubenswrapper[4822]: I0224 09:13:13.806533 4822 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:14 crc kubenswrapper[4822]: I0224 09:13:14.830428 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:14 crc kubenswrapper[4822]: I0224 09:13:14.830484 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:14 crc kubenswrapper[4822]: I0224 09:13:14.837450 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:14 crc kubenswrapper[4822]: I0224 09:13:14.841337 4822 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="12bce363-2c7e-4e27-98e6-68f25b7820b8" Feb 24 09:13:15 crc kubenswrapper[4822]: I0224 09:13:15.837227 4822 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:15 crc kubenswrapper[4822]: I0224 09:13:15.837571 4822 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4c2f6d18-d7b6-4c05-a90c-e6bf83a58862" Feb 24 09:13:18 crc kubenswrapper[4822]: I0224 09:13:18.352471 4822 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="12bce363-2c7e-4e27-98e6-68f25b7820b8" Feb 24 09:13:22 crc kubenswrapper[4822]: I0224 09:13:22.812772 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.331839 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.547229 4822 patch_prober.go:28] interesting pod/route-controller-manager-5d8b985885-4qql9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.547357 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" podUID="77162d2b-4877-44ed-a856-124557c3bbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.905647 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5d8b985885-4qql9_77162d2b-4877-44ed-a856-124557c3bbfd/route-controller-manager/0.log" Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.905732 4822 generic.go:334] "Generic (PLEG): container finished" podID="77162d2b-4877-44ed-a856-124557c3bbfd" containerID="514e66f32a7fbdaccb12e233aed6b7963cb5d2e9f91a13f1d900e684d61339ef" exitCode=255 Feb 24 09:13:23 crc 
kubenswrapper[4822]: I0224 09:13:23.905774 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" event={"ID":"77162d2b-4877-44ed-a856-124557c3bbfd","Type":"ContainerDied","Data":"514e66f32a7fbdaccb12e233aed6b7963cb5d2e9f91a13f1d900e684d61339ef"} Feb 24 09:13:23 crc kubenswrapper[4822]: I0224 09:13:23.906445 4822 scope.go:117] "RemoveContainer" containerID="514e66f32a7fbdaccb12e233aed6b7963cb5d2e9f91a13f1d900e684d61339ef" Feb 24 09:13:24 crc kubenswrapper[4822]: I0224 09:13:24.918163 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5d8b985885-4qql9_77162d2b-4877-44ed-a856-124557c3bbfd/route-controller-manager/0.log" Feb 24 09:13:24 crc kubenswrapper[4822]: I0224 09:13:24.918694 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" event={"ID":"77162d2b-4877-44ed-a856-124557c3bbfd","Type":"ContainerStarted","Data":"494e005ed0fb20ebb782466c3ea29707b4e418f2ca9f85fcd51b81d01455f7ad"} Feb 24 09:13:24 crc kubenswrapper[4822]: I0224 09:13:24.919679 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:13:24 crc kubenswrapper[4822]: I0224 09:13:24.926409 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.264292 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.489090 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.625166 4822 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.735304 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.740320 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 09:13:25 crc kubenswrapper[4822]: I0224 09:13:25.759387 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d8b985885-4qql9" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.387542 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.691494 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.793757 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.836764 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.881422 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 09:13:26 crc kubenswrapper[4822]: I0224 09:13:26.950507 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.026217 4822 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.103370 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.146224 4822 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.154110 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.154195 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.156472 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.161384 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.190136 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.190108949 podStartE2EDuration="14.190108949s" podCreationTimestamp="2026-02-24 09:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:13:27.181960957 +0000 UTC m=+329.569723535" watchObservedRunningTime="2026-02-24 09:13:27.190108949 +0000 UTC m=+329.577871527" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.229872 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.623111 4822 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.750543 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.812104 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.926806 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 09:13:27 crc kubenswrapper[4822]: I0224 09:13:27.970239 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.016653 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47006: no serving certificate available for the kubelet" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.099886 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.111933 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.125338 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.267306 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.359407 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.373351 4822 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.415617 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.453139 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.460734 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.490115 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.529450 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.536617 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.568809 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.596257 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.613374 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.614469 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 
09:13:28.668058 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.750204 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.774237 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.889312 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.936154 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 09:13:28 crc kubenswrapper[4822]: I0224 09:13:28.937779 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.090717 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.123142 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.151569 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.183353 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.187147 4822 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.190515 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.193621 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.217958 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.227397 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.350498 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.477874 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.504007 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.511652 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.701579 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.782225 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.856802 4822 reflector.go:368] Caches populated for *v1.Service 
from k8s.io/client-go/informers/factory.go:160 Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.879160 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.890851 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.926897 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.939229 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.964996 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 09:13:29 crc kubenswrapper[4822]: I0224 09:13:29.977962 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.044472 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.139244 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.177192 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.197274 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.250371 4822 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.341473 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.355567 4822 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.363716 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.373903 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.418699 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.453279 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.463569 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.502307 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.504245 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.564561 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: 
I0224 09:13:30.605061 4822 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.617122 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.644745 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.680232 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.719727 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.847225 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.895042 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.922796 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 09:13:30 crc kubenswrapper[4822]: I0224 09:13:30.987841 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.008686 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.023696 4822 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.123284 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.311209 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.337291 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.371115 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.411465 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.419756 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.455141 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.484811 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.565797 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.684587 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 09:13:31 crc 
kubenswrapper[4822]: I0224 09:13:31.707535 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.746560 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.818338 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.829574 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.853442 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.874319 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 09:13:31 crc kubenswrapper[4822]: I0224 09:13:31.939611 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.047782 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.284444 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.513666 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.518322 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 
24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.540585 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.576687 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.617268 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.656389 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.703592 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.742986 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.813296 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.821632 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.826650 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.919771 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 09:13:32 crc kubenswrapper[4822]: 
I0224 09:13:32.957230 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 09:13:32 crc kubenswrapper[4822]: I0224 09:13:32.999714 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.061220 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.102545 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.209872 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.231295 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.255734 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.323676 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.410763 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.484466 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.497309 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.570002 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.576933 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.593106 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.682164 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.879207 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.892589 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.910588 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.910600 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.925786 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 09:13:33 crc kubenswrapper[4822]: I0224 09:13:33.956989 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.030770 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.039435 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.043348 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.084556 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.097001 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.152303 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.188702 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.191753 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.227703 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.241561 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.256411 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.363735 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.401308 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.448215 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.609371 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.788434 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.790167 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.917824 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.950677 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 24 09:13:34 crc kubenswrapper[4822]: I0224 09:13:34.972093 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.097798 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.110477 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.123760 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.249755 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.303289 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.319644 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.323611 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.497385 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.513047 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.534153 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.594901 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.601242 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.685792 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.696468 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.697453 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.791254 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.827304 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.833957 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.891357 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.911449 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.983152 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 09:13:35 crc kubenswrapper[4822]: I0224 09:13:35.991003 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.019451 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.168416 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.201294 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.217033 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.360697 4822 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.361123 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c" gracePeriod=5
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.444826 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.511821 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.541460 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.559119 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.689286 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.696583 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.738349 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.838957 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.857668 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.871642 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.889588 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.917843 4822 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.967365 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 09:13:36 crc kubenswrapper[4822]: I0224 09:13:36.982095 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.072324 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.336701 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.446954 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.587392 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.587582 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.649625 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.688443 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.827654 4822 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.843685 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.929132 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 09:13:37 crc kubenswrapper[4822]: I0224 09:13:37.984372 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.065584 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.123724 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.142659 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.198892 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.322317 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.427347 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.484965 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.530465 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.576685 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.623543 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.652133 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.757681 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 09:13:38 crc kubenswrapper[4822]: I0224 09:13:38.859392 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.216936 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.346770 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.374996 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.538049 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.540247 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.803139 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.837854 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 24 09:13:39 crc kubenswrapper[4822]: I0224 09:13:39.962647 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.036006 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.139220 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.374560 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.547024 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.622375 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 24 09:13:40 crc kubenswrapper[4822]: I0224 09:13:40.817926 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 24 09:13:41 crc kubenswrapper[4822]: I0224 09:13:41.239187 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 09:13:41 crc kubenswrapper[4822]: I0224 09:13:41.692455 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 09:13:41 crc kubenswrapper[4822]: I0224 09:13:41.965697 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 09:13:41 crc kubenswrapper[4822]: I0224 09:13:41.966370 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.010105 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.022282 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.022350 4822 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c" exitCode=137
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.022405 4822 scope.go:117] "RemoveContainer" containerID="c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.022485 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.053486 4822 scope.go:117] "RemoveContainer" containerID="c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c"
Feb 24 09:13:42 crc kubenswrapper[4822]: E0224 09:13:42.054190 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c\": container with ID starting with c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c not found: ID does not exist" containerID="c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.054250 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c"} err="failed to get container status \"c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c\": rpc error: code = NotFound desc = could not find container \"c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c\": container with ID starting with c2dd4d767c2650ac8c541b45e13509661385057ef7e3e4ccf9542188a0fb437c not found: ID does not exist"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121613 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121755 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121808 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121859 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121893 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121902 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.121984 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.122091 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.122294 4822 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.122317 4822 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.122339 4822 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.123280 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.133655 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.223723 4822 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.223772 4822 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.348074 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.604801 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.829895 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.885501 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 09:13:42 crc kubenswrapper[4822]: I0224 09:13:42.955462 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 09:13:43 crc kubenswrapper[4822]: I0224 09:13:43.018938 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 09:13:43 crc kubenswrapper[4822]: I0224 09:13:43.079501 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 24 09:13:43 crc kubenswrapper[4822]: I0224 09:13:43.476046 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 09:13:51 crc kubenswrapper[4822]: I0224 09:13:51.681059 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35670: no serving certificate available for the kubelet"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.497143 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rdwk8"]
Feb 24 09:14:21 crc kubenswrapper[4822]: E0224 09:14:21.497970 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.497985 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:14:21 crc kubenswrapper[4822]: E0224 09:14:21.497996 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" containerName="installer"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.498003 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" containerName="installer"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.498118 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.498134 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe659d3-9cdb-43c9-8ce6-b5be02903c5e" containerName="installer"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.498545 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.514840 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rdwk8"]
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669289 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-certificates\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669352 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669403 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-bound-sa-token\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669584 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-trusted-ca\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669608 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3893d59-b2ef-42c4-8d36-fab79544662c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669631 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3893d59-b2ef-42c4-8d36-fab79544662c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669649 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-tls\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.669666 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmvz\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-kube-api-access-6jmvz\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.710123 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.770818 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-trusted-ca\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.771203 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3893d59-b2ef-42c4-8d36-fab79544662c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.771352 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3893d59-b2ef-42c4-8d36-fab79544662c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.771490 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-tls\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8"
Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.772639 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmvz\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-kube-api-access-6jmvz\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.772796 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-certificates\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.773033 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-bound-sa-token\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.773048 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-trusted-ca\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.771706 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3893d59-b2ef-42c4-8d36-fab79544662c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 
24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.774603 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-certificates\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.780936 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-registry-tls\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.781318 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3893d59-b2ef-42c4-8d36-fab79544662c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.793594 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-bound-sa-token\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.805178 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmvz\" (UniqueName: \"kubernetes.io/projected/d3893d59-b2ef-42c4-8d36-fab79544662c-kube-api-access-6jmvz\") pod \"image-registry-66df7c8f76-rdwk8\" (UID: \"d3893d59-b2ef-42c4-8d36-fab79544662c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:21 crc kubenswrapper[4822]: I0224 09:14:21.818613 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:22 crc kubenswrapper[4822]: I0224 09:14:22.044332 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rdwk8"] Feb 24 09:14:22 crc kubenswrapper[4822]: I0224 09:14:22.303151 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" event={"ID":"d3893d59-b2ef-42c4-8d36-fab79544662c","Type":"ContainerStarted","Data":"75906fc0d534c7d16d39cf07c5c396922c984779cda2dff308605c31f6564751"} Feb 24 09:14:22 crc kubenswrapper[4822]: I0224 09:14:22.303407 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" event={"ID":"d3893d59-b2ef-42c4-8d36-fab79544662c","Type":"ContainerStarted","Data":"420ef5bc96c2a42a1dfcb7fc36404780fe92a0341e590975330a9bef58e68c1c"} Feb 24 09:14:22 crc kubenswrapper[4822]: I0224 09:14:22.303614 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:22 crc kubenswrapper[4822]: I0224 09:14:22.321757 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" podStartSLOduration=1.32174162 podStartE2EDuration="1.32174162s" podCreationTimestamp="2026-02-24 09:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:14:22.319416516 +0000 UTC m=+384.707179064" watchObservedRunningTime="2026-02-24 09:14:22.32174162 +0000 UTC m=+384.709504168" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.161684 4822 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-fsxsb"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.162860 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsxsb" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="registry-server" containerID="cri-o://95cc63c47f1e01dfb821cc2d9b933533d01d42ba44eb1fe3d6b184fc824bbec5" gracePeriod=30 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.198773 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wplj"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.198997 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wplj" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="registry-server" containerID="cri-o://fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c" gracePeriod=30 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.233598 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.234191 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.234211 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gvdqb"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.234952 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.235407 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator" containerID="cri-o://348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90" gracePeriod=30 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.235693 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jm48n" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="registry-server" containerID="cri-o://1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882" gracePeriod=30 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.265133 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.265629 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xhmth" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server" containerID="cri-o://92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89" gracePeriod=30 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.293332 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gvdqb"] Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.322397 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.322459 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mshp\" (UniqueName: \"kubernetes.io/projected/9c4aaaab-f99a-43fa-8815-633397552cf0-kube-api-access-5mshp\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.322501 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.325711 4822 generic.go:334] "Generic (PLEG): container finished" podID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerID="95cc63c47f1e01dfb821cc2d9b933533d01d42ba44eb1fe3d6b184fc824bbec5" exitCode=0 Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.325751 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerDied","Data":"95cc63c47f1e01dfb821cc2d9b933533d01d42ba44eb1fe3d6b184fc824bbec5"} Feb 24 09:14:25 crc kubenswrapper[4822]: E0224 09:14:25.357061 4822 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 is running failed: container process not found" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89" 
cmd=["grpc_health_probe","-addr=:50051"] Feb 24 09:14:25 crc kubenswrapper[4822]: E0224 09:14:25.358448 4822 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 is running failed: container process not found" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 09:14:25 crc kubenswrapper[4822]: E0224 09:14:25.359220 4822 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 is running failed: container process not found" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 09:14:25 crc kubenswrapper[4822]: E0224 09:14:25.359251 4822 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-xhmth" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.424060 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.424124 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5mshp\" (UniqueName: \"kubernetes.io/projected/9c4aaaab-f99a-43fa-8815-633397552cf0-kube-api-access-5mshp\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.424162 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.425569 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.430845 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c4aaaab-f99a-43fa-8815-633397552cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.442084 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mshp\" (UniqueName: \"kubernetes.io/projected/9c4aaaab-f99a-43fa-8815-633397552cf0-kube-api-access-5mshp\") pod \"marketplace-operator-79b997595-gvdqb\" (UID: \"9c4aaaab-f99a-43fa-8815-633397552cf0\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.650674 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.659379 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsxsb" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.787928 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhmth" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.790169 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm48n" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.790194 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wplj" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.792997 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.830621 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlvlv\" (UniqueName: \"kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv\") pod \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.830685 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities\") pod \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.830851 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content\") pod \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\" (UID: \"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.831493 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities" (OuterVolumeSpecName: "utilities") pod "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" (UID: "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.834383 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.835429 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv" (OuterVolumeSpecName: "kube-api-access-qlvlv") pod "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" (UID: "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9"). InnerVolumeSpecName "kube-api-access-qlvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.912139 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" (UID: "4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935241 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content\") pod \"70973b60-6421-4c72-b5ba-b5ad78d060e7\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935284 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics\") pod \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935305 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqsv\" (UniqueName: \"kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv\") pod \"f5e41e0d-dd96-43df-94f6-f004923b10a3\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935324 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content\") pod \"b90902ec-35f8-4f8e-8d81-b813f439629c\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935342 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrj2m\" (UniqueName: \"kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m\") pod \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935359 4822 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-khgqc\" (UniqueName: \"kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc\") pod \"b90902ec-35f8-4f8e-8d81-b813f439629c\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935375 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities\") pod \"70973b60-6421-4c72-b5ba-b5ad78d060e7\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935415 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content\") pod \"f5e41e0d-dd96-43df-94f6-f004923b10a3\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935459 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzg5v\" (UniqueName: \"kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v\") pod \"70973b60-6421-4c72-b5ba-b5ad78d060e7\" (UID: \"70973b60-6421-4c72-b5ba-b5ad78d060e7\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935501 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities\") pod \"f5e41e0d-dd96-43df-94f6-f004923b10a3\" (UID: \"f5e41e0d-dd96-43df-94f6-f004923b10a3\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935519 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca\") pod 
\"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\" (UID: \"d5a6bd57-4bb7-45b4-8451-27e28ee580a5\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935534 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities\") pod \"b90902ec-35f8-4f8e-8d81-b813f439629c\" (UID: \"b90902ec-35f8-4f8e-8d81-b813f439629c\") " Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935718 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.935732 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlvlv\" (UniqueName: \"kubernetes.io/projected/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9-kube-api-access-qlvlv\") on node \"crc\" DevicePath \"\"" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.936687 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities" (OuterVolumeSpecName: "utilities") pod "b90902ec-35f8-4f8e-8d81-b813f439629c" (UID: "b90902ec-35f8-4f8e-8d81-b813f439629c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.936741 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities" (OuterVolumeSpecName: "utilities") pod "70973b60-6421-4c72-b5ba-b5ad78d060e7" (UID: "70973b60-6421-4c72-b5ba-b5ad78d060e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.937130 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities" (OuterVolumeSpecName: "utilities") pod "f5e41e0d-dd96-43df-94f6-f004923b10a3" (UID: "f5e41e0d-dd96-43df-94f6-f004923b10a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.938653 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v" (OuterVolumeSpecName: "kube-api-access-mzg5v") pod "70973b60-6421-4c72-b5ba-b5ad78d060e7" (UID: "70973b60-6421-4c72-b5ba-b5ad78d060e7"). InnerVolumeSpecName "kube-api-access-mzg5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.939872 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv" (OuterVolumeSpecName: "kube-api-access-gtqsv") pod "f5e41e0d-dd96-43df-94f6-f004923b10a3" (UID: "f5e41e0d-dd96-43df-94f6-f004923b10a3"). InnerVolumeSpecName "kube-api-access-gtqsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.940673 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d5a6bd57-4bb7-45b4-8451-27e28ee580a5" (UID: "d5a6bd57-4bb7-45b4-8451-27e28ee580a5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.942045 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m" (OuterVolumeSpecName: "kube-api-access-lrj2m") pod "d5a6bd57-4bb7-45b4-8451-27e28ee580a5" (UID: "d5a6bd57-4bb7-45b4-8451-27e28ee580a5"). InnerVolumeSpecName "kube-api-access-lrj2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.948117 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d5a6bd57-4bb7-45b4-8451-27e28ee580a5" (UID: "d5a6bd57-4bb7-45b4-8451-27e28ee580a5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.954809 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc" (OuterVolumeSpecName: "kube-api-access-khgqc") pod "b90902ec-35f8-4f8e-8d81-b813f439629c" (UID: "b90902ec-35f8-4f8e-8d81-b813f439629c"). InnerVolumeSpecName "kube-api-access-khgqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.968961 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b90902ec-35f8-4f8e-8d81-b813f439629c" (UID: "b90902ec-35f8-4f8e-8d81-b813f439629c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:14:25 crc kubenswrapper[4822]: I0224 09:14:25.998301 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5e41e0d-dd96-43df-94f6-f004923b10a3" (UID: "f5e41e0d-dd96-43df-94f6-f004923b10a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036721 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036754 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzg5v\" (UniqueName: \"kubernetes.io/projected/70973b60-6421-4c72-b5ba-b5ad78d060e7-kube-api-access-mzg5v\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036768 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5e41e0d-dd96-43df-94f6-f004923b10a3-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036776 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036786 4822 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036795 4822 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036805 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqsv\" (UniqueName: \"kubernetes.io/projected/f5e41e0d-dd96-43df-94f6-f004923b10a3-kube-api-access-gtqsv\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036814 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90902ec-35f8-4f8e-8d81-b813f439629c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036825 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrj2m\" (UniqueName: \"kubernetes.io/projected/d5a6bd57-4bb7-45b4-8451-27e28ee580a5-kube-api-access-lrj2m\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036835 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khgqc\" (UniqueName: \"kubernetes.io/projected/b90902ec-35f8-4f8e-8d81-b813f439629c-kube-api-access-khgqc\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.036843 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.073563 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70973b60-6421-4c72-b5ba-b5ad78d060e7" (UID: "70973b60-6421-4c72-b5ba-b5ad78d060e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.104734 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gvdqb"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.137550 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70973b60-6421-4c72-b5ba-b5ad78d060e7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.340621 4822 generic.go:334] "Generic (PLEG): container finished" podID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89" exitCode=0
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.340712 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhmth"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343518 4822 generic.go:334] "Generic (PLEG): container finished" podID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerID="1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882" exitCode=0
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343648 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm48n"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343871 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerDied","Data":"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343926 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhmth" event={"ID":"70973b60-6421-4c72-b5ba-b5ad78d060e7","Type":"ContainerDied","Data":"0d29a7d1df3799b8749667020819697f078fb67bfed92f865a2f930ede8d15b8"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343943 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerDied","Data":"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343958 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm48n" event={"ID":"b90902ec-35f8-4f8e-8d81-b813f439629c","Type":"ContainerDied","Data":"2df00c05c8d084121d723871c1cea3df6fda115bbf8e315070ce0659c03ac208"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.343982 4822 scope.go:117] "RemoveContainer" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.353992 4822 generic.go:334] "Generic (PLEG): container finished" podID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerID="348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90" exitCode=0
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.354032 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" event={"ID":"d5a6bd57-4bb7-45b4-8451-27e28ee580a5","Type":"ContainerDied","Data":"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.354084 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq" event={"ID":"d5a6bd57-4bb7-45b4-8451-27e28ee580a5","Type":"ContainerDied","Data":"569c36113f3b97c4d1c01e29f8bd3d038b5c2331f67b0776d5fb1cc636785a10"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.354045 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jr8gq"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.359563 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxsb" event={"ID":"4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9","Type":"ContainerDied","Data":"7e7d8bd3888ab5e4f40f5b568902a2ec44cf3eabdaa499d5a5a2559468d667df"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.359586 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsxsb"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.365333 4822 generic.go:334] "Generic (PLEG): container finished" podID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerID="fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c" exitCode=0
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.365406 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerDied","Data":"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.365438 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wplj" event={"ID":"f5e41e0d-dd96-43df-94f6-f004923b10a3","Type":"ContainerDied","Data":"07a5aa72101e9671bfdf4283fca9d725ae7c1abc4f8d28bc1bb258f9ea37ecea"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.365510 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wplj"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.369153 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" event={"ID":"9c4aaaab-f99a-43fa-8815-633397552cf0","Type":"ContainerStarted","Data":"2ba6a55a5e9e02a5ff1c4a23d9f447dd4c5c99972ff1ed95b497943a860643e5"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.369177 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" event={"ID":"9c4aaaab-f99a-43fa-8815-633397552cf0","Type":"ContainerStarted","Data":"e3f155be37038f5fc0ae8ba85d5a6539f375456a5b2ccc3453b9d8214f8546ca"}
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.369227 4822 scope.go:117] "RemoveContainer" containerID="ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.369417 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.370577 4822 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gvdqb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" start-of-body=
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.370622 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" podUID="9c4aaaab-f99a-43fa-8815-633397552cf0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.391927 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb" podStartSLOduration=1.3918952199999999 podStartE2EDuration="1.39189522s" podCreationTimestamp="2026-02-24 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:14:26.388449596 +0000 UTC m=+388.776212144" watchObservedRunningTime="2026-02-24 09:14:26.39189522 +0000 UTC m=+388.779657768"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.405251 4822 scope.go:117] "RemoveContainer" containerID="281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.415955 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.420716 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xhmth"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.426009 4822 scope.go:117] "RemoveContainer" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.426436 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89\": container with ID starting with 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 not found: ID does not exist" containerID="92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.426481 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89"} err="failed to get container status \"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89\": rpc error: code = NotFound desc = could not find container \"92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89\": container with ID starting with 92f58103c2a6063ca065bff30dadef2e9d3892ff8a6581516e53abc148bd6c89 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.426508 4822 scope.go:117] "RemoveContainer" containerID="ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.426783 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430\": container with ID starting with ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430 not found: ID does not exist" containerID="ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.427632 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430"} err="failed to get container status \"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430\": rpc error: code = NotFound desc = could not find container \"ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430\": container with ID starting with ff6bfa4f2203786406a2a9b57b2529103b93f50d79a2b023fa710ea066df8430 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.427653 4822 scope.go:117] "RemoveContainer" containerID="281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.428847 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c\": container with ID starting with 281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c not found: ID does not exist" containerID="281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.428880 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c"} err="failed to get container status \"281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c\": rpc error: code = NotFound desc = could not find container \"281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c\": container with ID starting with 281c213af7741790e53f6121910b6386460ef9661cc7b2be2f264553f976683c not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.428904 4822 scope.go:117] "RemoveContainer" containerID="1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.434616 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.447767 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm48n"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.452700 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsxsb"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.456634 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsxsb"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.457615 4822 scope.go:117] "RemoveContainer" containerID="5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.460288 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.466944 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jr8gq"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.468550 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wplj"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.471214 4822 scope.go:117] "RemoveContainer" containerID="8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.473620 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wplj"]
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.486168 4822 scope.go:117] "RemoveContainer" containerID="1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.486566 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882\": container with ID starting with 1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882 not found: ID does not exist" containerID="1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.486589 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882"} err="failed to get container status \"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882\": rpc error: code = NotFound desc = could not find container \"1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882\": container with ID starting with 1999590c55924a6d9378a9af61f93fc2dc16676f6ea23f3a0d3be281096b8882 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.486605 4822 scope.go:117] "RemoveContainer" containerID="5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.486795 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961\": container with ID starting with 5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961 not found: ID does not exist" containerID="5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.486808 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961"} err="failed to get container status \"5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961\": rpc error: code = NotFound desc = could not find container \"5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961\": container with ID starting with 5b8889fbfade3969646cd51dc33c763521160c41a4e2cda470460e27f70fb961 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.486818 4822 scope.go:117] "RemoveContainer" containerID="8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.488163 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0\": container with ID starting with 8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0 not found: ID does not exist" containerID="8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.488184 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0"} err="failed to get container status \"8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0\": rpc error: code = NotFound desc = could not find container \"8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0\": container with ID starting with 8ef9ac588423f97e63504702d2a2af228ecab06b86d8fee50b183bf145c10ed0 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.488201 4822 scope.go:117] "RemoveContainer" containerID="348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.500889 4822 scope.go:117] "RemoveContainer" containerID="348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.501403 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90\": container with ID starting with 348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90 not found: ID does not exist" containerID="348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.501443 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90"} err="failed to get container status \"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90\": rpc error: code = NotFound desc = could not find container \"348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90\": container with ID starting with 348127274f5e81e71b3ee82ea325fe3bb17b1704b3fa4e97752d32caeb9cad90 not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.501472 4822 scope.go:117] "RemoveContainer" containerID="95cc63c47f1e01dfb821cc2d9b933533d01d42ba44eb1fe3d6b184fc824bbec5"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.516606 4822 scope.go:117] "RemoveContainer" containerID="b9fc45af2a7140ef7e6abf7a716bcec1a332673f662044bc55b763b622db0824"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.538059 4822 scope.go:117] "RemoveContainer" containerID="ca2a7165bc38d19f21296b61dfc07b7af3e8058f7ef6cbcb9ca73f5a954d2629"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.553388 4822 scope.go:117] "RemoveContainer" containerID="fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.569198 4822 scope.go:117] "RemoveContainer" containerID="b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.585180 4822 scope.go:117] "RemoveContainer" containerID="a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.601699 4822 scope.go:117] "RemoveContainer" containerID="fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.602408 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c\": container with ID starting with fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c not found: ID does not exist" containerID="fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.602506 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c"} err="failed to get container status \"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c\": rpc error: code = NotFound desc = could not find container \"fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c\": container with ID starting with fda0b19661a99d69d7e1b550dc0c4960c859b87dd8ca7a416bb03f5489e23e6c not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.602592 4822 scope.go:117] "RemoveContainer" containerID="b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.603448 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea\": container with ID starting with b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea not found: ID does not exist" containerID="b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.603522 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea"} err="failed to get container status \"b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea\": rpc error: code = NotFound desc = could not find container \"b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea\": container with ID starting with b2e4364f059047c273e4b4e16604adaa769cd399b3e96bcecffbb31e692030ea not found: ID does not exist"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.603596 4822 scope.go:117] "RemoveContainer" containerID="a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e"
Feb 24 09:14:26 crc kubenswrapper[4822]: E0224 09:14:26.603997 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e\": container with ID starting with a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e not found: ID does not exist" containerID="a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e"
Feb 24 09:14:26 crc kubenswrapper[4822]: I0224 09:14:26.604072 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e"} err="failed to get container status \"a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e\": rpc error: code = NotFound desc = could not find container \"a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e\": container with ID starting with a8cf2767edf31b1971ac045ecdbb0eb7da8b1cc81256f80de639d4d80d367e7e not found: ID does not exist"
Feb 24 09:14:27 crc kubenswrapper[4822]: I0224 09:14:27.385463 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gvdqb"
Feb 24 09:14:28 crc kubenswrapper[4822]: I0224 09:14:28.345453 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" path="/var/lib/kubelet/pods/4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9/volumes"
Feb 24 09:14:28 crc kubenswrapper[4822]: I0224 09:14:28.346855 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" path="/var/lib/kubelet/pods/70973b60-6421-4c72-b5ba-b5ad78d060e7/volumes"
Feb 24 09:14:28 crc kubenswrapper[4822]: I0224 09:14:28.348057 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" path="/var/lib/kubelet/pods/b90902ec-35f8-4f8e-8d81-b813f439629c/volumes"
Feb 24 09:14:28 crc kubenswrapper[4822]: I0224 09:14:28.350103 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" path="/var/lib/kubelet/pods/d5a6bd57-4bb7-45b4-8451-27e28ee580a5/volumes"
Feb 24 09:14:28 crc kubenswrapper[4822]: I0224 09:14:28.351010 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" path="/var/lib/kubelet/pods/f5e41e0d-dd96-43df-94f6-f004923b10a3/volumes"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.986529 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zjmh"]
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.987897 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.988073 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.988185 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.988293 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.988400 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.988509 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.988622 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.988758 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.988863 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.988998 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.989120 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.989220 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.989324 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.989428 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.989672 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.989781 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.989880 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.990007 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.990107 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.990215 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.990320 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.990406 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.990512 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.990603 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="extract-content"
Feb 24 09:14:30 crc kubenswrapper[4822]: E0224 09:14:30.990689 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.990785 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="extract-utilities"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.991064 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90902ec-35f8-4f8e-8d81-b813f439629c" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.991185 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a6bd57-4bb7-45b4-8451-27e28ee580a5" containerName="marketplace-operator"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.991273 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e151a1a-a1ff-4b6d-80a0-0eecd0a3e0b9" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.991364 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e41e0d-dd96-43df-94f6-f004923b10a3" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.991439 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="70973b60-6421-4c72-b5ba-b5ad78d060e7" containerName="registry-server"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.992443 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjmh"
Feb 24 09:14:30 crc kubenswrapper[4822]: I0224 09:14:30.994766 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.006594 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zjmh"]
Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.107657 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-utilities\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh"
Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.107831 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-catalog-content\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh"
Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.107888 4822
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjjz\" (UniqueName: \"kubernetes.io/projected/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-kube-api-access-tvjjz\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.209449 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvjjz\" (UniqueName: \"kubernetes.io/projected/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-kube-api-access-tvjjz\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.209513 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-utilities\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.209532 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-catalog-content\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.209996 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-catalog-content\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 
09:14:31.210141 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-utilities\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.242235 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvjjz\" (UniqueName: \"kubernetes.io/projected/aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4-kube-api-access-tvjjz\") pod \"community-operators-6zjmh\" (UID: \"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4\") " pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.322642 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:31 crc kubenswrapper[4822]: I0224 09:14:31.774140 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zjmh"] Feb 24 09:14:31 crc kubenswrapper[4822]: W0224 09:14:31.786863 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa5e9a4c_06a4_4346_bebf_e7fe5df6b3b4.slice/crio-f524a153494e8d71739b64063078dd35c4d2e6091c7d1c1f71c0cbec2002b5ae WatchSource:0}: Error finding container f524a153494e8d71739b64063078dd35c4d2e6091c7d1c1f71c0cbec2002b5ae: Status 404 returned error can't find the container with id f524a153494e8d71739b64063078dd35c4d2e6091c7d1c1f71c0cbec2002b5ae Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.386123 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brm9b"] Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.387439 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.393317 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.399626 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brm9b"] Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.416643 4822 generic.go:334] "Generic (PLEG): container finished" podID="aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4" containerID="7d72baa34255d6809b01045b23d2521d1d5670bd9a8b3c6dc307982a307d2b22" exitCode=0 Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.416691 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjmh" event={"ID":"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4","Type":"ContainerDied","Data":"7d72baa34255d6809b01045b23d2521d1d5670bd9a8b3c6dc307982a307d2b22"} Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.416720 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjmh" event={"ID":"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4","Type":"ContainerStarted","Data":"f524a153494e8d71739b64063078dd35c4d2e6091c7d1c1f71c0cbec2002b5ae"} Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.526216 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k955d\" (UniqueName: \"kubernetes.io/projected/be77cf2b-3af6-479e-aa39-ecd15bd3af40-kube-api-access-k955d\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.526293 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-utilities\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.526318 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-catalog-content\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.627396 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-utilities\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.627447 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-catalog-content\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.627520 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k955d\" (UniqueName: \"kubernetes.io/projected/be77cf2b-3af6-479e-aa39-ecd15bd3af40-kube-api-access-k955d\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.627848 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-utilities\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.627964 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be77cf2b-3af6-479e-aa39-ecd15bd3af40-catalog-content\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.646119 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k955d\" (UniqueName: \"kubernetes.io/projected/be77cf2b-3af6-479e-aa39-ecd15bd3af40-kube-api-access-k955d\") pod \"redhat-marketplace-brm9b\" (UID: \"be77cf2b-3af6-479e-aa39-ecd15bd3af40\") " pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:32 crc kubenswrapper[4822]: I0224 09:14:32.715565 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.134589 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brm9b"] Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.386200 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txwst"] Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.387660 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.389783 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.396459 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txwst"] Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.448578 4822 generic.go:334] "Generic (PLEG): container finished" podID="be77cf2b-3af6-479e-aa39-ecd15bd3af40" containerID="e5bc00e28680bb1e2ad111e8c9addfb92288abf665334b1838ad5e5cdb0a170d" exitCode=0 Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.448628 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brm9b" event={"ID":"be77cf2b-3af6-479e-aa39-ecd15bd3af40","Type":"ContainerDied","Data":"e5bc00e28680bb1e2ad111e8c9addfb92288abf665334b1838ad5e5cdb0a170d"} Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.448652 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brm9b" event={"ID":"be77cf2b-3af6-479e-aa39-ecd15bd3af40","Type":"ContainerStarted","Data":"0c0bb63114b244b7582171594f7caef85f1f9e0cb72c505dbb75a497311836ed"} Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.539889 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kh7s\" (UniqueName: \"kubernetes.io/projected/1de2c1f9-3087-443b-9ad5-8a988a435c75-kube-api-access-6kh7s\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.539979 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-catalog-content\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.540017 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-utilities\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.640821 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kh7s\" (UniqueName: \"kubernetes.io/projected/1de2c1f9-3087-443b-9ad5-8a988a435c75-kube-api-access-6kh7s\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.640901 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-catalog-content\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.640944 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-utilities\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.641803 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-utilities\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.641971 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de2c1f9-3087-443b-9ad5-8a988a435c75-catalog-content\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.661308 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kh7s\" (UniqueName: \"kubernetes.io/projected/1de2c1f9-3087-443b-9ad5-8a988a435c75-kube-api-access-6kh7s\") pod \"redhat-operators-txwst\" (UID: \"1de2c1f9-3087-443b-9ad5-8a988a435c75\") " pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.723374 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:33 crc kubenswrapper[4822]: I0224 09:14:33.953198 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txwst"] Feb 24 09:14:33 crc kubenswrapper[4822]: W0224 09:14:33.963381 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1de2c1f9_3087_443b_9ad5_8a988a435c75.slice/crio-d3fb7984ae0908cdf680bc71f482074deaa16b1e34c2b29788d92cfe64f4c695 WatchSource:0}: Error finding container d3fb7984ae0908cdf680bc71f482074deaa16b1e34c2b29788d92cfe64f4c695: Status 404 returned error can't find the container with id d3fb7984ae0908cdf680bc71f482074deaa16b1e34c2b29788d92cfe64f4c695 Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.459238 4822 generic.go:334] "Generic (PLEG): container finished" podID="be77cf2b-3af6-479e-aa39-ecd15bd3af40" containerID="f7fe7c4ca09f2cacdc56f1145495bdddafae04c2f34199b266b412fe9f7c81d6" exitCode=0 Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.459411 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brm9b" event={"ID":"be77cf2b-3af6-479e-aa39-ecd15bd3af40","Type":"ContainerDied","Data":"f7fe7c4ca09f2cacdc56f1145495bdddafae04c2f34199b266b412fe9f7c81d6"} Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.461737 4822 generic.go:334] "Generic (PLEG): container finished" podID="1de2c1f9-3087-443b-9ad5-8a988a435c75" containerID="6dd4d53d691292cc1eee85e821d6912520cf5439deb968a56358ecd642387ff2" exitCode=0 Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.462283 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txwst" event={"ID":"1de2c1f9-3087-443b-9ad5-8a988a435c75","Type":"ContainerDied","Data":"6dd4d53d691292cc1eee85e821d6912520cf5439deb968a56358ecd642387ff2"} Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.462371 
4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txwst" event={"ID":"1de2c1f9-3087-443b-9ad5-8a988a435c75","Type":"ContainerStarted","Data":"d3fb7984ae0908cdf680bc71f482074deaa16b1e34c2b29788d92cfe64f4c695"} Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.466468 4822 generic.go:334] "Generic (PLEG): container finished" podID="aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4" containerID="43524620c11fc2d2d4a288b8f1aa75dabe576ea5c5146ce64d6c1872015ef1d4" exitCode=0 Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.466505 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjmh" event={"ID":"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4","Type":"ContainerDied","Data":"43524620c11fc2d2d4a288b8f1aa75dabe576ea5c5146ce64d6c1872015ef1d4"} Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.798323 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.799367 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.801399 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.806217 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.957604 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.957663 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:34 crc kubenswrapper[4822]: I0224 09:14:34.957717 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pb9h\" (UniqueName: \"kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.059214 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities\") pod \"certified-operators-chshb\" (UID: 
\"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.059425 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.059510 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pb9h\" (UniqueName: \"kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.059866 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.060313 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content\") pod \"certified-operators-chshb\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.092833 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pb9h\" (UniqueName: \"kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h\") pod \"certified-operators-chshb\" (UID: 
\"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.131028 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:35 crc kubenswrapper[4822]: I0224 09:14:35.543497 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:14:35 crc kubenswrapper[4822]: W0224 09:14:35.550658 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ffa1e1_3d61_4458_a48b_5364bcce0b29.slice/crio-9102da19bd544a8cc41b9155bcfefad88210f1ef433061426d47465162af7e7c WatchSource:0}: Error finding container 9102da19bd544a8cc41b9155bcfefad88210f1ef433061426d47465162af7e7c: Status 404 returned error can't find the container with id 9102da19bd544a8cc41b9155bcfefad88210f1ef433061426d47465162af7e7c Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.477575 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjmh" event={"ID":"aa5e9a4c-06a4-4346-bebf-e7fe5df6b3b4","Type":"ContainerStarted","Data":"24327dd1b10574a35eb949b2659ee8e463155717d7e54fcd594fa34d52e63b2b"} Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.486764 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brm9b" event={"ID":"be77cf2b-3af6-479e-aa39-ecd15bd3af40","Type":"ContainerStarted","Data":"2b26120a4797e1844b3c2bb7153d9b9b9f2387bf6568912a02a235283ad27d59"} Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.491709 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zjmh" podStartSLOduration=3.7302744 podStartE2EDuration="6.491692924s" podCreationTimestamp="2026-02-24 09:14:30 +0000 UTC" firstStartedPulling="2026-02-24 
09:14:32.418550522 +0000 UTC m=+394.806313060" lastFinishedPulling="2026-02-24 09:14:35.179968996 +0000 UTC m=+397.567731584" observedRunningTime="2026-02-24 09:14:36.491038266 +0000 UTC m=+398.878800834" watchObservedRunningTime="2026-02-24 09:14:36.491692924 +0000 UTC m=+398.879455472" Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.497040 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txwst" event={"ID":"1de2c1f9-3087-443b-9ad5-8a988a435c75","Type":"ContainerStarted","Data":"45c4af28071e054d9b8bb08556a12c7bdb74d6a95d306a3166cdc2f2a97f380e"} Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.498391 4822 generic.go:334] "Generic (PLEG): container finished" podID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerID="eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a" exitCode=0 Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.498436 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerDied","Data":"eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a"} Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.498460 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerStarted","Data":"9102da19bd544a8cc41b9155bcfefad88210f1ef433061426d47465162af7e7c"} Feb 24 09:14:36 crc kubenswrapper[4822]: I0224 09:14:36.509062 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brm9b" podStartSLOduration=3.092092743 podStartE2EDuration="4.509042576s" podCreationTimestamp="2026-02-24 09:14:32 +0000 UTC" firstStartedPulling="2026-02-24 09:14:33.453901218 +0000 UTC m=+395.841663766" lastFinishedPulling="2026-02-24 09:14:34.870851051 +0000 UTC m=+397.258613599" 
observedRunningTime="2026-02-24 09:14:36.505707265 +0000 UTC m=+398.893469823" watchObservedRunningTime="2026-02-24 09:14:36.509042576 +0000 UTC m=+398.896805144" Feb 24 09:14:37 crc kubenswrapper[4822]: I0224 09:14:37.505523 4822 generic.go:334] "Generic (PLEG): container finished" podID="1de2c1f9-3087-443b-9ad5-8a988a435c75" containerID="45c4af28071e054d9b8bb08556a12c7bdb74d6a95d306a3166cdc2f2a97f380e" exitCode=0 Feb 24 09:14:37 crc kubenswrapper[4822]: I0224 09:14:37.505660 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txwst" event={"ID":"1de2c1f9-3087-443b-9ad5-8a988a435c75","Type":"ContainerDied","Data":"45c4af28071e054d9b8bb08556a12c7bdb74d6a95d306a3166cdc2f2a97f380e"} Feb 24 09:14:37 crc kubenswrapper[4822]: I0224 09:14:37.508429 4822 generic.go:334] "Generic (PLEG): container finished" podID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerID="55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17" exitCode=0 Feb 24 09:14:37 crc kubenswrapper[4822]: I0224 09:14:37.509030 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerDied","Data":"55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17"} Feb 24 09:14:38 crc kubenswrapper[4822]: I0224 09:14:38.531376 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerStarted","Data":"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc"} Feb 24 09:14:38 crc kubenswrapper[4822]: I0224 09:14:38.559338 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chshb" podStartSLOduration=3.035911388 podStartE2EDuration="4.55932244s" podCreationTimestamp="2026-02-24 09:14:34 +0000 UTC" firstStartedPulling="2026-02-24 09:14:36.49963703 
+0000 UTC m=+398.887399578" lastFinishedPulling="2026-02-24 09:14:38.023048042 +0000 UTC m=+400.410810630" observedRunningTime="2026-02-24 09:14:38.557872131 +0000 UTC m=+400.945634689" watchObservedRunningTime="2026-02-24 09:14:38.55932244 +0000 UTC m=+400.947084998" Feb 24 09:14:39 crc kubenswrapper[4822]: I0224 09:14:39.538987 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txwst" event={"ID":"1de2c1f9-3087-443b-9ad5-8a988a435c75","Type":"ContainerStarted","Data":"b716e079f509a96487c89f92c0cac4b689610cc4a2a79286ee9231794d703157"} Feb 24 09:14:39 crc kubenswrapper[4822]: I0224 09:14:39.566239 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txwst" podStartSLOduration=2.56950581 podStartE2EDuration="6.566217801s" podCreationTimestamp="2026-02-24 09:14:33 +0000 UTC" firstStartedPulling="2026-02-24 09:14:34.463340817 +0000 UTC m=+396.851103385" lastFinishedPulling="2026-02-24 09:14:38.460052828 +0000 UTC m=+400.847815376" observedRunningTime="2026-02-24 09:14:39.559968661 +0000 UTC m=+401.947731209" watchObservedRunningTime="2026-02-24 09:14:39.566217801 +0000 UTC m=+401.953980369" Feb 24 09:14:41 crc kubenswrapper[4822]: I0224 09:14:41.323565 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:41 crc kubenswrapper[4822]: I0224 09:14:41.323618 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:41 crc kubenswrapper[4822]: I0224 09:14:41.381309 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:41 crc kubenswrapper[4822]: I0224 09:14:41.616190 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zjmh" Feb 24 09:14:41 crc 
kubenswrapper[4822]: I0224 09:14:41.823370 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rdwk8" Feb 24 09:14:41 crc kubenswrapper[4822]: I0224 09:14:41.887371 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:14:42 crc kubenswrapper[4822]: I0224 09:14:42.716421 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:42 crc kubenswrapper[4822]: I0224 09:14:42.716481 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:42 crc kubenswrapper[4822]: I0224 09:14:42.764658 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:43 crc kubenswrapper[4822]: I0224 09:14:43.617593 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brm9b" Feb 24 09:14:43 crc kubenswrapper[4822]: I0224 09:14:43.723790 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:43 crc kubenswrapper[4822]: I0224 09:14:43.723828 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:44 crc kubenswrapper[4822]: I0224 09:14:44.795735 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txwst" podUID="1de2c1f9-3087-443b-9ad5-8a988a435c75" containerName="registry-server" probeResult="failure" output=< Feb 24 09:14:44 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:14:44 crc kubenswrapper[4822]: > Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.132513 4822 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.132826 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.187895 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.614165 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.676725 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:14:45 crc kubenswrapper[4822]: I0224 09:14:45.676845 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:14:53 crc kubenswrapper[4822]: I0224 09:14:53.785717 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:14:53 crc kubenswrapper[4822]: I0224 09:14:53.832469 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txwst" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.207717 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk"] Feb 24 
09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.211486 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.217837 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk"] Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.269473 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.269699 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.313352 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.313447 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.313525 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knld4\" (UniqueName: \"kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4\") pod 
\"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.415655 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knld4\" (UniqueName: \"kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.415871 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.416129 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.417390 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.425586 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.439031 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knld4\" (UniqueName: \"kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4\") pod \"collect-profiles-29532075-8zpfk\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:00 crc kubenswrapper[4822]: I0224 09:15:00.583750 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:01 crc kubenswrapper[4822]: I0224 09:15:01.056372 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk"] Feb 24 09:15:01 crc kubenswrapper[4822]: W0224 09:15:01.067055 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbda61b9a_bb37_43aa_87a1_00f9182d98ec.slice/crio-fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55 WatchSource:0}: Error finding container fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55: Status 404 returned error can't find the container with id fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55 Feb 24 09:15:01 crc kubenswrapper[4822]: I0224 09:15:01.682700 4822 generic.go:334] "Generic (PLEG): container finished" podID="bda61b9a-bb37-43aa-87a1-00f9182d98ec" containerID="e16dc8a7d6918d7b5591aff756005bdc24d10677beee7d165800c3fdcc948d5a" exitCode=0 Feb 24 09:15:01 crc kubenswrapper[4822]: I0224 09:15:01.682801 4822 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" event={"ID":"bda61b9a-bb37-43aa-87a1-00f9182d98ec","Type":"ContainerDied","Data":"e16dc8a7d6918d7b5591aff756005bdc24d10677beee7d165800c3fdcc948d5a"} Feb 24 09:15:01 crc kubenswrapper[4822]: I0224 09:15:01.682847 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" event={"ID":"bda61b9a-bb37-43aa-87a1-00f9182d98ec","Type":"ContainerStarted","Data":"fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55"} Feb 24 09:15:02 crc kubenswrapper[4822]: I0224 09:15:02.968233 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.050976 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knld4\" (UniqueName: \"kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4\") pod \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.051020 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume\") pod \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.051056 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume\") pod \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\" (UID: \"bda61b9a-bb37-43aa-87a1-00f9182d98ec\") " Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.051992 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume" (OuterVolumeSpecName: "config-volume") pod "bda61b9a-bb37-43aa-87a1-00f9182d98ec" (UID: "bda61b9a-bb37-43aa-87a1-00f9182d98ec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.057585 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4" (OuterVolumeSpecName: "kube-api-access-knld4") pod "bda61b9a-bb37-43aa-87a1-00f9182d98ec" (UID: "bda61b9a-bb37-43aa-87a1-00f9182d98ec"). InnerVolumeSpecName "kube-api-access-knld4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.059449 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bda61b9a-bb37-43aa-87a1-00f9182d98ec" (UID: "bda61b9a-bb37-43aa-87a1-00f9182d98ec"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.153025 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knld4\" (UniqueName: \"kubernetes.io/projected/bda61b9a-bb37-43aa-87a1-00f9182d98ec-kube-api-access-knld4\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.153083 4822 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bda61b9a-bb37-43aa-87a1-00f9182d98ec-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.153102 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bda61b9a-bb37-43aa-87a1-00f9182d98ec-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.696857 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" event={"ID":"bda61b9a-bb37-43aa-87a1-00f9182d98ec","Type":"ContainerDied","Data":"fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55"} Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.696944 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd80df3bb30457675463afd714257633727830e56edff7bd842b766cd98bc55" Feb 24 09:15:03 crc kubenswrapper[4822]: I0224 09:15:03.696983 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk" Feb 24 09:15:06 crc kubenswrapper[4822]: I0224 09:15:06.918750 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" podUID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" containerName="registry" containerID="cri-o://a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a" gracePeriod=30 Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.381236 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.524450 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.524536 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.524569 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.524658 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.524976 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.525049 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.525109 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cp2l\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.525142 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca\") pod \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\" (UID: \"f9ca89b3-e69d-4443-9e13-10ec52c688e5\") " Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.526726 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.527038 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.533787 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.534555 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.537669 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.538286 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l" (OuterVolumeSpecName: "kube-api-access-2cp2l") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "kube-api-access-2cp2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.539632 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.560553 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9ca89b3-e69d-4443-9e13-10ec52c688e5" (UID: "f9ca89b3-e69d-4443-9e13-10ec52c688e5"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627596 4822 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627650 4822 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ca89b3-e69d-4443-9e13-10ec52c688e5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627672 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cp2l\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-kube-api-access-2cp2l\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627689 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ca89b3-e69d-4443-9e13-10ec52c688e5-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627710 4822 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ca89b3-e69d-4443-9e13-10ec52c688e5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627727 4822 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.627745 4822 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ca89b3-e69d-4443-9e13-10ec52c688e5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:07 crc 
kubenswrapper[4822]: I0224 09:15:07.726676 4822 generic.go:334] "Generic (PLEG): container finished" podID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" containerID="a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a" exitCode=0 Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.726735 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" event={"ID":"f9ca89b3-e69d-4443-9e13-10ec52c688e5","Type":"ContainerDied","Data":"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a"} Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.726774 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" event={"ID":"f9ca89b3-e69d-4443-9e13-10ec52c688e5","Type":"ContainerDied","Data":"1d121c48b64d459621bc78491870cb2cbbb4bf1c016b042748ba0939b49dd07d"} Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.726804 4822 scope.go:117] "RemoveContainer" containerID="a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.727577 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcmh" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.757195 4822 scope.go:117] "RemoveContainer" containerID="a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a" Feb 24 09:15:07 crc kubenswrapper[4822]: E0224 09:15:07.757834 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a\": container with ID starting with a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a not found: ID does not exist" containerID="a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.757956 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a"} err="failed to get container status \"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a\": rpc error: code = NotFound desc = could not find container \"a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a\": container with ID starting with a21f7c3c19d8509708b1d781b56e753a83da9fcbaa9b03b0237e3d30fdc1995a not found: ID does not exist" Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.792875 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:15:07 crc kubenswrapper[4822]: I0224 09:15:07.805742 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcmh"] Feb 24 09:15:08 crc kubenswrapper[4822]: I0224 09:15:08.352567 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" path="/var/lib/kubelet/pods/f9ca89b3-e69d-4443-9e13-10ec52c688e5/volumes" Feb 24 09:15:15 crc kubenswrapper[4822]: I0224 
09:15:15.677163 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:15:15 crc kubenswrapper[4822]: I0224 09:15:15.677768 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:15:45 crc kubenswrapper[4822]: I0224 09:15:45.676499 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:15:45 crc kubenswrapper[4822]: I0224 09:15:45.677113 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:15:45 crc kubenswrapper[4822]: I0224 09:15:45.677174 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:15:45 crc kubenswrapper[4822]: I0224 09:15:45.677984 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:15:45 crc kubenswrapper[4822]: I0224 09:15:45.678082 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55" gracePeriod=600 Feb 24 09:15:46 crc kubenswrapper[4822]: I0224 09:15:46.357537 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55" exitCode=0 Feb 24 09:15:46 crc kubenswrapper[4822]: I0224 09:15:46.357634 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55"} Feb 24 09:15:46 crc kubenswrapper[4822]: I0224 09:15:46.358420 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2"} Feb 24 09:15:46 crc kubenswrapper[4822]: I0224 09:15:46.358458 4822 scope.go:117] "RemoveContainer" containerID="00d72e2a80f6b15e7b1f86206da1682dfae29bb1bf1dcca20ca38556ba3fc2c0" Feb 24 09:17:45 crc kubenswrapper[4822]: I0224 09:17:45.676779 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:17:45 crc kubenswrapper[4822]: I0224 
09:17:45.677413 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:18:15 crc kubenswrapper[4822]: I0224 09:18:15.676497 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:18:15 crc kubenswrapper[4822]: I0224 09:18:15.677413 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:18:45 crc kubenswrapper[4822]: I0224 09:18:45.676571 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:18:45 crc kubenswrapper[4822]: I0224 09:18:45.677366 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:18:45 crc kubenswrapper[4822]: I0224 09:18:45.677474 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:18:45 crc kubenswrapper[4822]: I0224 09:18:45.678484 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:18:45 crc kubenswrapper[4822]: I0224 09:18:45.678582 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2" gracePeriod=600 Feb 24 09:18:46 crc kubenswrapper[4822]: I0224 09:18:46.631592 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2" exitCode=0 Feb 24 09:18:46 crc kubenswrapper[4822]: I0224 09:18:46.631658 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2"} Feb 24 09:18:46 crc kubenswrapper[4822]: I0224 09:18:46.632393 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf"} Feb 24 09:18:46 crc kubenswrapper[4822]: I0224 09:18:46.632433 4822 scope.go:117] "RemoveContainer" 
containerID="f79b6484854bddb9f85478e90221a31ce3af1ac665d60b2449b51b6b7845fa55" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.395047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49892: no serving certificate available for the kubelet" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.486297 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xltlj"] Feb 24 09:19:19 crc kubenswrapper[4822]: E0224 09:19:19.486989 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda61b9a-bb37-43aa-87a1-00f9182d98ec" containerName="collect-profiles" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.487014 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda61b9a-bb37-43aa-87a1-00f9182d98ec" containerName="collect-profiles" Feb 24 09:19:19 crc kubenswrapper[4822]: E0224 09:19:19.487030 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" containerName="registry" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.487039 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" containerName="registry" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.487153 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ca89b3-e69d-4443-9e13-10ec52c688e5" containerName="registry" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.487169 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda61b9a-bb37-43aa-87a1-00f9182d98ec" containerName="collect-profiles" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.487579 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.489658 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.490404 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.493880 4822 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ghbmz" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.495213 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xltlj"] Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.504733 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-crfml"] Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.505509 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-crfml" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.507904 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rb8gv"] Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.508370 4822 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hdjj6" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.509049 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.514896 4822 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zwzcg" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.525989 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-crfml"] Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.539682 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rb8gv"] Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.611411 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp9f\" (UniqueName: \"kubernetes.io/projected/6c378113-0650-48a0-99a9-abf43f807c2a-kube-api-access-6lp9f\") pod \"cert-manager-webhook-687f57d79b-rb8gv\" (UID: \"6c378113-0650-48a0-99a9-abf43f807c2a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.611491 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5j5x\" (UniqueName: \"kubernetes.io/projected/a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc-kube-api-access-p5j5x\") pod \"cert-manager-cainjector-cf98fcc89-xltlj\" (UID: \"a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.611510 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bctj\" (UniqueName: \"kubernetes.io/projected/822ba8c1-a7af-4760-bb31-e4eabc46af10-kube-api-access-6bctj\") pod \"cert-manager-858654f9db-crfml\" (UID: \"822ba8c1-a7af-4760-bb31-e4eabc46af10\") " pod="cert-manager/cert-manager-858654f9db-crfml" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.712803 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5j5x\" (UniqueName: \"kubernetes.io/projected/a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc-kube-api-access-p5j5x\") pod \"cert-manager-cainjector-cf98fcc89-xltlj\" (UID: \"a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.713087 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bctj\" (UniqueName: \"kubernetes.io/projected/822ba8c1-a7af-4760-bb31-e4eabc46af10-kube-api-access-6bctj\") pod \"cert-manager-858654f9db-crfml\" (UID: \"822ba8c1-a7af-4760-bb31-e4eabc46af10\") " pod="cert-manager/cert-manager-858654f9db-crfml" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.713218 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp9f\" (UniqueName: \"kubernetes.io/projected/6c378113-0650-48a0-99a9-abf43f807c2a-kube-api-access-6lp9f\") pod \"cert-manager-webhook-687f57d79b-rb8gv\" (UID: \"6c378113-0650-48a0-99a9-abf43f807c2a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.731598 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bctj\" (UniqueName: \"kubernetes.io/projected/822ba8c1-a7af-4760-bb31-e4eabc46af10-kube-api-access-6bctj\") pod \"cert-manager-858654f9db-crfml\" (UID: \"822ba8c1-a7af-4760-bb31-e4eabc46af10\") " pod="cert-manager/cert-manager-858654f9db-crfml" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.732023 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5j5x\" (UniqueName: \"kubernetes.io/projected/a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc-kube-api-access-p5j5x\") pod \"cert-manager-cainjector-cf98fcc89-xltlj\" (UID: \"a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.737962 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp9f\" (UniqueName: \"kubernetes.io/projected/6c378113-0650-48a0-99a9-abf43f807c2a-kube-api-access-6lp9f\") pod \"cert-manager-webhook-687f57d79b-rb8gv\" (UID: \"6c378113-0650-48a0-99a9-abf43f807c2a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.815770 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.826053 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-crfml" Feb 24 09:19:19 crc kubenswrapper[4822]: I0224 09:19:19.840340 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.113703 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-crfml"] Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.124058 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.266234 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xltlj"] Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.269672 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rb8gv"] Feb 24 09:19:20 crc kubenswrapper[4822]: W0224 09:19:20.275149 4822 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c378113_0650_48a0_99a9_abf43f807c2a.slice/crio-56d549d4cf3067c1d51d5507e3749ed2423a835493a6de5c1723db556a2c4e3f WatchSource:0}: Error finding container 56d549d4cf3067c1d51d5507e3749ed2423a835493a6de5c1723db556a2c4e3f: Status 404 returned error can't find the container with id 56d549d4cf3067c1d51d5507e3749ed2423a835493a6de5c1723db556a2c4e3f Feb 24 09:19:20 crc kubenswrapper[4822]: W0224 09:19:20.277493 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e69faf_5fbf_41cf_a8cd_7ee0af3976bc.slice/crio-c509acbebb993aedca74805c001bf9c859ab1ef1b435f6f27fc4bd6165dfbcae WatchSource:0}: Error finding container c509acbebb993aedca74805c001bf9c859ab1ef1b435f6f27fc4bd6165dfbcae: Status 404 returned error can't find the container with id c509acbebb993aedca74805c001bf9c859ab1ef1b435f6f27fc4bd6165dfbcae Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.892314 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" event={"ID":"a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc","Type":"ContainerStarted","Data":"c509acbebb993aedca74805c001bf9c859ab1ef1b435f6f27fc4bd6165dfbcae"} Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.895672 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" event={"ID":"6c378113-0650-48a0-99a9-abf43f807c2a","Type":"ContainerStarted","Data":"56d549d4cf3067c1d51d5507e3749ed2423a835493a6de5c1723db556a2c4e3f"} Feb 24 09:19:20 crc kubenswrapper[4822]: I0224 09:19:20.897368 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-crfml" event={"ID":"822ba8c1-a7af-4760-bb31-e4eabc46af10","Type":"ContainerStarted","Data":"97da7e3d6e174545deedc15ada5a7b3e2a99912535f8709be99d4795640ad36b"} Feb 24 09:19:22 crc kubenswrapper[4822]: I0224 09:19:22.910891 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-crfml" event={"ID":"822ba8c1-a7af-4760-bb31-e4eabc46af10","Type":"ContainerStarted","Data":"7cb1ef1324f43f5ad8609bdfd8168ed10490d2af92e08c2818597f1056255c0e"} Feb 24 09:19:22 crc kubenswrapper[4822]: I0224 09:19:22.928929 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-crfml" podStartSLOduration=1.416332726 podStartE2EDuration="3.928887113s" podCreationTimestamp="2026-02-24 09:19:19 +0000 UTC" firstStartedPulling="2026-02-24 09:19:20.123811067 +0000 UTC m=+682.511573615" lastFinishedPulling="2026-02-24 09:19:22.636365414 +0000 UTC m=+685.024128002" observedRunningTime="2026-02-24 09:19:22.922369232 +0000 UTC m=+685.310131780" watchObservedRunningTime="2026-02-24 09:19:22.928887113 +0000 UTC m=+685.316649661" Feb 24 09:19:24 crc kubenswrapper[4822]: I0224 09:19:24.926251 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" event={"ID":"a5e69faf-5fbf-41cf-a8cd-7ee0af3976bc","Type":"ContainerStarted","Data":"6bc1852cbdff8140be43824f901ac725f3b6a64973f9236407ea79a063cb18c7"} Feb 24 09:19:24 crc kubenswrapper[4822]: I0224 09:19:24.929425 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" event={"ID":"6c378113-0650-48a0-99a9-abf43f807c2a","Type":"ContainerStarted","Data":"1786388e7dafa20651ec3eb70d44fdbddbb84a316f9b341c73ee6a05a83b58d3"} Feb 24 09:19:24 crc kubenswrapper[4822]: I0224 09:19:24.929719 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:24 crc kubenswrapper[4822]: I0224 09:19:24.951327 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xltlj" podStartSLOduration=2.052525838 podStartE2EDuration="5.95130648s" 
podCreationTimestamp="2026-02-24 09:19:19 +0000 UTC" firstStartedPulling="2026-02-24 09:19:20.278713776 +0000 UTC m=+682.666476344" lastFinishedPulling="2026-02-24 09:19:24.177494408 +0000 UTC m=+686.565256986" observedRunningTime="2026-02-24 09:19:24.947551536 +0000 UTC m=+687.335314144" watchObservedRunningTime="2026-02-24 09:19:24.95130648 +0000 UTC m=+687.339069038" Feb 24 09:19:24 crc kubenswrapper[4822]: I0224 09:19:24.969228 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" podStartSLOduration=1.991336583 podStartE2EDuration="5.969202315s" podCreationTimestamp="2026-02-24 09:19:19 +0000 UTC" firstStartedPulling="2026-02-24 09:19:20.277631615 +0000 UTC m=+682.665394163" lastFinishedPulling="2026-02-24 09:19:24.255497297 +0000 UTC m=+686.643259895" observedRunningTime="2026-02-24 09:19:24.968524287 +0000 UTC m=+687.356286885" watchObservedRunningTime="2026-02-24 09:19:24.969202315 +0000 UTC m=+687.356964893" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.788026 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-669bp"] Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789253 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-controller" containerID="cri-o://8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789287 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="nbdb" containerID="cri-o://df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789441 4822 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="northd" containerID="cri-o://a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789562 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789666 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-node" containerID="cri-o://effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789744 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-acl-logging" containerID="cri-o://539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.789906 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="sbdb" containerID="cri-o://3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.843521 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" 
containerName="ovnkube-controller" containerID="cri-o://0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" gracePeriod=30 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.844346 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rb8gv" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.975744 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.978633 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-acl-logging/0.log" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979210 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-controller/0.log" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979595 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" exitCode=0 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979628 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" exitCode=0 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979635 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" exitCode=143 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979643 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" 
containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" exitCode=143 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979666 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979717 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979731 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.979742 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.981588 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/2.log" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.982025 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/1.log" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.982056 4822 generic.go:334] "Generic (PLEG): container finished" podID="90b654a4-010b-4a5e-b2d8-d42764fcb628" 
containerID="e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841" exitCode=2 Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.982079 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerDied","Data":"e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841"} Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.982110 4822 scope.go:117] "RemoveContainer" containerID="81db2617c595292860af147428f1fdcece1db664671dfa8b9d1ad69cf77251a2" Feb 24 09:19:29 crc kubenswrapper[4822]: I0224 09:19:29.982793 4822 scope.go:117] "RemoveContainer" containerID="e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841" Feb 24 09:19:29 crc kubenswrapper[4822]: E0224 09:19:29.983194 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqrzq_openshift-multus(90b654a4-010b-4a5e-b2d8-d42764fcb628)\"" pod="openshift-multus/multus-lqrzq" podUID="90b654a4-010b-4a5e-b2d8-d42764fcb628" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.143213 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.146069 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-acl-logging/0.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.146722 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-controller/0.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.147461 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214177 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2gjqj"] Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214410 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214424 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214437 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kubecfg-setup" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214446 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kubecfg-setup" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214456 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214465 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214479 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214487 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214500 4822 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214508 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214521 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="nbdb" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214529 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="nbdb" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214538 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214546 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214558 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-acl-logging" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214566 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-acl-logging" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214578 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214586 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214596 4822 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="sbdb" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214604 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="sbdb" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214618 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-node" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214628 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-node" Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.214646 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="northd" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214655 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="northd" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214762 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-node" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214775 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214786 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214797 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-controller" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214807 4822 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="sbdb"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214816 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214829 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214855 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovn-acl-logging"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214865 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="nbdb"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214875 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214886 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="kube-rbac-proxy-ovn-metrics"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.214898 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="northd"
Feb 24 09:19:30 crc kubenswrapper[4822]: E0224 09:19:30.215048 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.215069 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" containerName="ovnkube-controller"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.217077 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.257576 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.257887 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.257668 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket" (OuterVolumeSpecName: "log-socket") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.257941 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258039 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258079 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258115 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258171 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258209 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258234 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258284 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258318 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258348 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258375 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258417 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg8qv\" (UniqueName: \"kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258461 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258514 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258545 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258578 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258607 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258635 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"72f416e6-5647-4b65-b06f-df73aca5e594\" (UID: \"72f416e6-5647-4b65-b06f-df73aca5e594\") "
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258765 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.258866 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259246 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259344 4822 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259387 4822 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-log-socket\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259414 4822 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259427 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259479 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259482 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259540 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log" (OuterVolumeSpecName: "node-log") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259639 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259683 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259698 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259727 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259734 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash" (OuterVolumeSpecName: "host-slash") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259761 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259764 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.259789 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.260280 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.265528 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.269710 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv" (OuterVolumeSpecName: "kube-api-access-cg8qv") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "kube-api-access-cg8qv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.283824 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "72f416e6-5647-4b65-b06f-df73aca5e594" (UID: "72f416e6-5647-4b65-b06f-df73aca5e594"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.360863 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.360950 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-netns\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361011 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-config\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361059 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-var-lib-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361089 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-env-overrides\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361125 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058c6774-0374-41d6-aa70-619e9c6373f4-ovn-node-metrics-cert\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361287 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-bin\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361420 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-node-log\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361463 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-netd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361534 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361617 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-systemd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361702 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-script-lib\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361763 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361796 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhn8j\" (UniqueName: \"kubernetes.io/projected/058c6774-0374-41d6-aa70-619e9c6373f4-kube-api-access-dhn8j\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361825 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-kubelet\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361863 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-log-socket\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361891 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-etc-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.361954 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-slash\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362023 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-systemd-units\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362086 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-ovn\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362163 4822 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72f416e6-5647-4b65-b06f-df73aca5e594-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362184 4822 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362202 4822 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362219 4822 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362236 4822 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362253 4822 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362269 4822 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362287 4822 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362304 4822 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362320 4822 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-slash\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362338 4822 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72f416e6-5647-4b65-b06f-df73aca5e594-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362354 4822 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362371 4822 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362388 4822 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362404 4822 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362421 4822 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72f416e6-5647-4b65-b06f-df73aca5e594-node-log\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.362437 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg8qv\" (UniqueName: \"kubernetes.io/projected/72f416e6-5647-4b65-b06f-df73aca5e594-kube-api-access-cg8qv\") on node \"crc\" DevicePath \"\""
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464050 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-script-lib\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464173 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464233 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhn8j\" (UniqueName: \"kubernetes.io/projected/058c6774-0374-41d6-aa70-619e9c6373f4-kube-api-access-dhn8j\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464268 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-kubelet\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464307 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-log-socket\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464336 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-etc-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464368 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-slash\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464430 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-systemd-units\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464506 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-ovn\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464507 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-log-socket\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464547 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464529 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-kubelet\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464577 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-systemd-units\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464601 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-netns\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464668 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464710 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-netns\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464765 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-config\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.464674 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-slash\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.465003 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-ovn\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.465035 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.465594 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-etc-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj"
Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.465862 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-var-lib-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.465905 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-env-overrides\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466221 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058c6774-0374-41d6-aa70-619e9c6373f4-ovn-node-metrics-cert\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466313 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-bin\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466375 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-node-log\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466390 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-var-lib-openvswitch\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466451 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-netd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466414 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-netd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466498 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-cni-bin\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466533 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466589 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-systemd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466784 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-run-systemd\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466872 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.466951 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058c6774-0374-41d6-aa70-619e9c6373f4-node-log\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.468512 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-env-overrides\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.469229 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-config\") pod \"ovnkube-node-2gjqj\" (UID: 
\"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.469329 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058c6774-0374-41d6-aa70-619e9c6373f4-ovnkube-script-lib\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.473638 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058c6774-0374-41d6-aa70-619e9c6373f4-ovn-node-metrics-cert\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.482174 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhn8j\" (UniqueName: \"kubernetes.io/projected/058c6774-0374-41d6-aa70-619e9c6373f4-kube-api-access-dhn8j\") pod \"ovnkube-node-2gjqj\" (UID: \"058c6774-0374-41d6-aa70-619e9c6373f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.538640 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.992456 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovnkube-controller/3.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.996419 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-acl-logging/0.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.997436 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-669bp_72f416e6-5647-4b65-b06f-df73aca5e594/ovn-controller/0.log" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998237 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" exitCode=0 Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998300 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" exitCode=0 Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998306 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998405 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998426 4822 scope.go:117] "RemoveContainer" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998324 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" exitCode=0 Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998527 4822 generic.go:334] "Generic (PLEG): container finished" podID="72f416e6-5647-4b65-b06f-df73aca5e594" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" exitCode=0 Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998407 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998746 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998784 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} Feb 24 09:19:30 crc kubenswrapper[4822]: I0224 09:19:30.998804 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-669bp" 
event={"ID":"72f416e6-5647-4b65-b06f-df73aca5e594","Type":"ContainerDied","Data":"2559c6d343beb3d63a0ce41b808722d1a90cd58cb64711e91fb36365c5d898b9"} Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.000886 4822 generic.go:334] "Generic (PLEG): container finished" podID="058c6774-0374-41d6-aa70-619e9c6373f4" containerID="1a2e832b88bad58bed499438074a6e6621435bcbc17ccafebe89c9bc83064713" exitCode=0 Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.001004 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerDied","Data":"1a2e832b88bad58bed499438074a6e6621435bcbc17ccafebe89c9bc83064713"} Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.001072 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"0986cfe310b26ec3960a0d53dd262e6a3ebab28c9b6e2ab60a65f30de8ac9bc5"} Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.006098 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/2.log" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.041039 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.097499 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-669bp"] Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.103306 4822 scope.go:117] "RemoveContainer" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.104263 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-669bp"] Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 
09:19:31.149759 4822 scope.go:117] "RemoveContainer" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.168102 4822 scope.go:117] "RemoveContainer" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.192550 4822 scope.go:117] "RemoveContainer" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.224933 4822 scope.go:117] "RemoveContainer" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.266360 4822 scope.go:117] "RemoveContainer" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.289537 4822 scope.go:117] "RemoveContainer" containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.309133 4822 scope.go:117] "RemoveContainer" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.329316 4822 scope.go:117] "RemoveContainer" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.331090 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": container with ID starting with 0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed not found: ID does not exist" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.331160 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} err="failed to get container status \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": rpc error: code = NotFound desc = could not find container \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": container with ID starting with 0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.331271 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.334528 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": container with ID starting with 43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6 not found: ID does not exist" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.334575 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} err="failed to get container status \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": rpc error: code = NotFound desc = could not find container \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": container with ID starting with 43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.334605 4822 scope.go:117] "RemoveContainer" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.334989 4822 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": container with ID starting with 3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2 not found: ID does not exist" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335023 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} err="failed to get container status \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": rpc error: code = NotFound desc = could not find container \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": container with ID starting with 3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335040 4822 scope.go:117] "RemoveContainer" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.335382 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": container with ID starting with df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821 not found: ID does not exist" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335405 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} err="failed to get container status \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": rpc error: code = NotFound desc = could not find container 
\"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": container with ID starting with df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335418 4822 scope.go:117] "RemoveContainer" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.335800 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": container with ID starting with a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a not found: ID does not exist" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335836 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} err="failed to get container status \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": rpc error: code = NotFound desc = could not find container \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": container with ID starting with a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.335896 4822 scope.go:117] "RemoveContainer" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.336267 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": container with ID starting with 717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8 not found: ID does not exist" 
containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336298 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} err="failed to get container status \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": rpc error: code = NotFound desc = could not find container \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": container with ID starting with 717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336314 4822 scope.go:117] "RemoveContainer" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.336571 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": container with ID starting with effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897 not found: ID does not exist" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336600 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} err="failed to get container status \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": rpc error: code = NotFound desc = could not find container \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": container with ID starting with effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336617 4822 scope.go:117] 
"RemoveContainer" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.336893 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": container with ID starting with 539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226 not found: ID does not exist" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336932 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} err="failed to get container status \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": rpc error: code = NotFound desc = could not find container \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": container with ID starting with 539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.336950 4822 scope.go:117] "RemoveContainer" containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.337237 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": container with ID starting with 8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f not found: ID does not exist" containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.337257 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} err="failed to get container status \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": rpc error: code = NotFound desc = could not find container \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": container with ID starting with 8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.337274 4822 scope.go:117] "RemoveContainer" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: E0224 09:19:31.338117 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": container with ID starting with ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4 not found: ID does not exist" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338181 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4"} err="failed to get container status \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": rpc error: code = NotFound desc = could not find container \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": container with ID starting with ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338208 4822 scope.go:117] "RemoveContainer" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338672 4822 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} err="failed to get container status \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": rpc error: code = NotFound desc = could not find container \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": container with ID starting with 0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338696 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338905 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} err="failed to get container status \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": rpc error: code = NotFound desc = could not find container \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": container with ID starting with 43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.338945 4822 scope.go:117] "RemoveContainer" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.339295 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} err="failed to get container status \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": rpc error: code = NotFound desc = could not find container \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": container with ID starting with 3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2 not 
found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.339327 4822 scope.go:117] "RemoveContainer" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.339595 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} err="failed to get container status \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": rpc error: code = NotFound desc = could not find container \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": container with ID starting with df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.339618 4822 scope.go:117] "RemoveContainer" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.339973 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} err="failed to get container status \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": rpc error: code = NotFound desc = could not find container \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": container with ID starting with a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.340002 4822 scope.go:117] "RemoveContainer" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.340729 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} err="failed to get 
container status \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": rpc error: code = NotFound desc = could not find container \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": container with ID starting with 717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.340750 4822 scope.go:117] "RemoveContainer" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.341190 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} err="failed to get container status \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": rpc error: code = NotFound desc = could not find container \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": container with ID starting with effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.341220 4822 scope.go:117] "RemoveContainer" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.341884 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} err="failed to get container status \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": rpc error: code = NotFound desc = could not find container \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": container with ID starting with 539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.341930 4822 scope.go:117] "RemoveContainer" 
containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.342297 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} err="failed to get container status \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": rpc error: code = NotFound desc = could not find container \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": container with ID starting with 8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.342328 4822 scope.go:117] "RemoveContainer" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.346276 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4"} err="failed to get container status \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": rpc error: code = NotFound desc = could not find container \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": container with ID starting with ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.346299 4822 scope.go:117] "RemoveContainer" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.346648 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} err="failed to get container status \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": rpc error: code = NotFound desc = could 
not find container \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": container with ID starting with 0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.346688 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.347156 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} err="failed to get container status \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": rpc error: code = NotFound desc = could not find container \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": container with ID starting with 43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.347172 4822 scope.go:117] "RemoveContainer" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.347461 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} err="failed to get container status \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": rpc error: code = NotFound desc = could not find container \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": container with ID starting with 3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.347475 4822 scope.go:117] "RemoveContainer" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 
09:19:31.347899 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} err="failed to get container status \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": rpc error: code = NotFound desc = could not find container \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": container with ID starting with df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.347938 4822 scope.go:117] "RemoveContainer" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.348254 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} err="failed to get container status \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": rpc error: code = NotFound desc = could not find container \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": container with ID starting with a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.348274 4822 scope.go:117] "RemoveContainer" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.348612 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} err="failed to get container status \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": rpc error: code = NotFound desc = could not find container \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": container with ID starting with 
717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.348663 4822 scope.go:117] "RemoveContainer" containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.349145 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} err="failed to get container status \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": rpc error: code = NotFound desc = could not find container \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": container with ID starting with effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.349171 4822 scope.go:117] "RemoveContainer" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.349948 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} err="failed to get container status \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": rpc error: code = NotFound desc = could not find container \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": container with ID starting with 539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.349989 4822 scope.go:117] "RemoveContainer" containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350246 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} err="failed to get container status \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": rpc error: code = NotFound desc = could not find container \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": container with ID starting with 8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350281 4822 scope.go:117] "RemoveContainer" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350562 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4"} err="failed to get container status \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": rpc error: code = NotFound desc = could not find container \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": container with ID starting with ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350596 4822 scope.go:117] "RemoveContainer" containerID="0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350875 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed"} err="failed to get container status \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": rpc error: code = NotFound desc = could not find container \"0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed\": container with ID starting with 0025bf71a189e0879ecb3d4f1e3305509bf5c27a75ade7f2aa2ede54783ee6ed not found: ID does not 
exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.350906 4822 scope.go:117] "RemoveContainer" containerID="43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.351214 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6"} err="failed to get container status \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": rpc error: code = NotFound desc = could not find container \"43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6\": container with ID starting with 43a874f7b7c39d3e4af0a11cfef6a87096adbaa6377ec97d0a70106fcf5772a6 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.351275 4822 scope.go:117] "RemoveContainer" containerID="3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.352711 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2"} err="failed to get container status \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": rpc error: code = NotFound desc = could not find container \"3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2\": container with ID starting with 3bc9bb06e76f7a9cefd98063e40005def28ca40981d13fa2818a736eb0bfcae2 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.352838 4822 scope.go:117] "RemoveContainer" containerID="df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.353493 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821"} err="failed to get container status 
\"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": rpc error: code = NotFound desc = could not find container \"df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821\": container with ID starting with df096ff4aa57f0d89c8e770ffe66eee8d4942721d3178cb4f26a65c72372b821 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.353569 4822 scope.go:117] "RemoveContainer" containerID="a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.354864 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a"} err="failed to get container status \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": rpc error: code = NotFound desc = could not find container \"a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a\": container with ID starting with a4f5ddae468a947eb17b4aee924f81710dc9a29d51478d751fb022cc3a005e5a not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.354899 4822 scope.go:117] "RemoveContainer" containerID="717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.356339 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8"} err="failed to get container status \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": rpc error: code = NotFound desc = could not find container \"717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8\": container with ID starting with 717893fc59bf9612ea2addb9139cbc484bf0b49dea6cee4ea2511b0d1a0502a8 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.356363 4822 scope.go:117] "RemoveContainer" 
containerID="effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.357171 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897"} err="failed to get container status \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": rpc error: code = NotFound desc = could not find container \"effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897\": container with ID starting with effee4e2194eb6b5354a4fc1198d41115460a2b8cc4550020f2f66c78e4ee897 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.357209 4822 scope.go:117] "RemoveContainer" containerID="539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.357646 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226"} err="failed to get container status \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": rpc error: code = NotFound desc = could not find container \"539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226\": container with ID starting with 539eaadd58736b75ad35ed3b224bd5f203f4dc3e61b55a98ba8eb9c388fad226 not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.357678 4822 scope.go:117] "RemoveContainer" containerID="8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.358411 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f"} err="failed to get container status \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": rpc error: code = NotFound desc = could 
not find container \"8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f\": container with ID starting with 8364d02f491eed39af0c876a9f62fccf7a90916df0ac3678fe0fac3a4c09542f not found: ID does not exist" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.358453 4822 scope.go:117] "RemoveContainer" containerID="ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4" Feb 24 09:19:31 crc kubenswrapper[4822]: I0224 09:19:31.358739 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4"} err="failed to get container status \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": rpc error: code = NotFound desc = could not find container \"ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4\": container with ID starting with ca10d00c26af090e9b6e6aeb917d31e020a168708c58455a25cf9e2e150aaaf4 not found: ID does not exist" Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.018289 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"f2b549c1cc25ed7c22fc7797684c41cb36fae375deafa271119b681572434c6e"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.018776 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"7bd22d0bf75ce3121e2cbebd1dce485d1aceee070593fcb9bba975a2d23c4bca"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.018805 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"a80afff3c3fd695a55a287e342d6577ede4130c867078f707bb671e4c93003bc"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 
09:19:32.018825 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"43053faf71bb6093515cb1ba897d339f5a42f462e46716055bf698a4aed9dea2"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.018842 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"a6f027e77715a79cef26366d59666606971101366adfcbafac9b3395745e2019"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.018862 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"49abbcfb33eadb64afec8b9615e8413ca10839aff90e5b8c94680eab04014e53"} Feb 24 09:19:32 crc kubenswrapper[4822]: I0224 09:19:32.366686 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f416e6-5647-4b65-b06f-df73aca5e594" path="/var/lib/kubelet/pods/72f416e6-5647-4b65-b06f-df73aca5e594/volumes" Feb 24 09:19:35 crc kubenswrapper[4822]: I0224 09:19:35.048176 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"4daec1c1dfb35dbbc3f79e84626b55f77df4f00ee5a33d798d9614f96b9bf09c"} Feb 24 09:19:37 crc kubenswrapper[4822]: I0224 09:19:37.067504 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" event={"ID":"058c6774-0374-41d6-aa70-619e9c6373f4","Type":"ContainerStarted","Data":"2dcaa9f964a026ead746bf0d97208f5860c65117a77c662fc36abeb5b6c20fa7"} Feb 24 09:19:37 crc kubenswrapper[4822]: I0224 09:19:37.067870 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 
09:19:37 crc kubenswrapper[4822]: I0224 09:19:37.067906 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:37 crc kubenswrapper[4822]: I0224 09:19:37.128007 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:37 crc kubenswrapper[4822]: I0224 09:19:37.145867 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" podStartSLOduration=7.145848419 podStartE2EDuration="7.145848419s" podCreationTimestamp="2026-02-24 09:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:19:37.144349517 +0000 UTC m=+699.532112075" watchObservedRunningTime="2026-02-24 09:19:37.145848419 +0000 UTC m=+699.533610977" Feb 24 09:19:38 crc kubenswrapper[4822]: I0224 09:19:38.075255 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:38 crc kubenswrapper[4822]: I0224 09:19:38.118398 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:19:41 crc kubenswrapper[4822]: I0224 09:19:41.337446 4822 scope.go:117] "RemoveContainer" containerID="e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841" Feb 24 09:19:41 crc kubenswrapper[4822]: E0224 09:19:41.337722 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqrzq_openshift-multus(90b654a4-010b-4a5e-b2d8-d42764fcb628)\"" pod="openshift-multus/multus-lqrzq" podUID="90b654a4-010b-4a5e-b2d8-d42764fcb628" Feb 24 09:19:54 crc kubenswrapper[4822]: I0224 09:19:54.338853 4822 scope.go:117] "RemoveContainer" 
containerID="e7027b0c5af7dc9663a7699b0b9ac4baf2f15c12a3c15d5cdb17b4a746845841" Feb 24 09:19:55 crc kubenswrapper[4822]: I0224 09:19:55.207533 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqrzq_90b654a4-010b-4a5e-b2d8-d42764fcb628/kube-multus/2.log" Feb 24 09:19:55 crc kubenswrapper[4822]: I0224 09:19:55.208099 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqrzq" event={"ID":"90b654a4-010b-4a5e-b2d8-d42764fcb628","Type":"ContainerStarted","Data":"718d17ce5fe3c5f295f3cc2fb2b1dce8a129e5d34934e150c9802c0d6c03e210"} Feb 24 09:20:00 crc kubenswrapper[4822]: I0224 09:20:00.580887 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2gjqj" Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.905592 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9"] Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.906727 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.910628 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.924212 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9"] Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.989988 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.990093 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:06 crc kubenswrapper[4822]: I0224 09:20:06.990133 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gzn\" (UniqueName: \"kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: 
I0224 09:20:07.091047 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.091139 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.091181 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gzn\" (UniqueName: \"kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.091854 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.092041 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.125464 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gzn\" (UniqueName: \"kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.230896 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:07 crc kubenswrapper[4822]: I0224 09:20:07.573876 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9"] Feb 24 09:20:08 crc kubenswrapper[4822]: I0224 09:20:08.299031 4822 generic.go:334] "Generic (PLEG): container finished" podID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerID="94025e0ae07740b2a4c9ba3851d528cd224fc963f1d3cefea30c91e44691f886" exitCode=0 Feb 24 09:20:08 crc kubenswrapper[4822]: I0224 09:20:08.300336 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" event={"ID":"e0c5851e-7f79-4a8e-b26c-4690e67ea80f","Type":"ContainerDied","Data":"94025e0ae07740b2a4c9ba3851d528cd224fc963f1d3cefea30c91e44691f886"} Feb 24 09:20:08 crc kubenswrapper[4822]: I0224 09:20:08.300402 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" event={"ID":"e0c5851e-7f79-4a8e-b26c-4690e67ea80f","Type":"ContainerStarted","Data":"f35ee1baf876eff363c46c22922167f340a95190419b4ed9473579a87f92f2d8"} Feb 24 09:20:10 crc kubenswrapper[4822]: I0224 09:20:10.313891 4822 generic.go:334] "Generic (PLEG): container finished" podID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerID="d882d0f38b13a93fd29d9fede38ad87485a0dd897b66b04e9eee5e0210e04fcd" exitCode=0 Feb 24 09:20:10 crc kubenswrapper[4822]: I0224 09:20:10.313948 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" event={"ID":"e0c5851e-7f79-4a8e-b26c-4690e67ea80f","Type":"ContainerDied","Data":"d882d0f38b13a93fd29d9fede38ad87485a0dd897b66b04e9eee5e0210e04fcd"} Feb 24 09:20:11 crc kubenswrapper[4822]: I0224 09:20:11.341776 4822 generic.go:334] "Generic (PLEG): container finished" podID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerID="c95ec0d99a14c9f6f108d59e7abfac1baa3141a34e3b44aec12f3e65516a5745" exitCode=0 Feb 24 09:20:11 crc kubenswrapper[4822]: I0224 09:20:11.341828 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" event={"ID":"e0c5851e-7f79-4a8e-b26c-4690e67ea80f","Type":"ContainerDied","Data":"c95ec0d99a14c9f6f108d59e7abfac1baa3141a34e3b44aec12f3e65516a5745"} Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.631826 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.694951 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle\") pod \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.695082 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5gzn\" (UniqueName: \"kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn\") pod \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.695187 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util\") pod \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\" (UID: \"e0c5851e-7f79-4a8e-b26c-4690e67ea80f\") " Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.696109 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle" (OuterVolumeSpecName: "bundle") pod "e0c5851e-7f79-4a8e-b26c-4690e67ea80f" (UID: "e0c5851e-7f79-4a8e-b26c-4690e67ea80f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.707279 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn" (OuterVolumeSpecName: "kube-api-access-d5gzn") pod "e0c5851e-7f79-4a8e-b26c-4690e67ea80f" (UID: "e0c5851e-7f79-4a8e-b26c-4690e67ea80f"). InnerVolumeSpecName "kube-api-access-d5gzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.715163 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util" (OuterVolumeSpecName: "util") pod "e0c5851e-7f79-4a8e-b26c-4690e67ea80f" (UID: "e0c5851e-7f79-4a8e-b26c-4690e67ea80f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.796503 4822 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-util\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.796565 4822 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:12 crc kubenswrapper[4822]: I0224 09:20:12.796588 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5gzn\" (UniqueName: \"kubernetes.io/projected/e0c5851e-7f79-4a8e-b26c-4690e67ea80f-kube-api-access-d5gzn\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:13 crc kubenswrapper[4822]: I0224 09:20:13.359336 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" event={"ID":"e0c5851e-7f79-4a8e-b26c-4690e67ea80f","Type":"ContainerDied","Data":"f35ee1baf876eff363c46c22922167f340a95190419b4ed9473579a87f92f2d8"} Feb 24 09:20:13 crc kubenswrapper[4822]: I0224 09:20:13.359403 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35ee1baf876eff363c46c22922167f340a95190419b4ed9473579a87f92f2d8" Feb 24 09:20:13 crc kubenswrapper[4822]: I0224 09:20:13.359412 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarxpt9" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.600241 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jfk5h"] Feb 24 09:20:15 crc kubenswrapper[4822]: E0224 09:20:15.600757 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="pull" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.600774 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="pull" Feb 24 09:20:15 crc kubenswrapper[4822]: E0224 09:20:15.600798 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="extract" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.600806 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="extract" Feb 24 09:20:15 crc kubenswrapper[4822]: E0224 09:20:15.600818 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="util" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.600825 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="util" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.600949 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c5851e-7f79-4a8e-b26c-4690e67ea80f" containerName="extract" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.601388 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.603365 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.604533 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.604663 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xcwnx" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.616719 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jfk5h"] Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.733446 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxq7\" (UniqueName: \"kubernetes.io/projected/887dabae-3c4e-46ad-81d5-2f6372340d46-kube-api-access-xhxq7\") pod \"nmstate-operator-694c9596b7-jfk5h\" (UID: \"887dabae-3c4e-46ad-81d5-2f6372340d46\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.835335 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhxq7\" (UniqueName: \"kubernetes.io/projected/887dabae-3c4e-46ad-81d5-2f6372340d46-kube-api-access-xhxq7\") pod \"nmstate-operator-694c9596b7-jfk5h\" (UID: \"887dabae-3c4e-46ad-81d5-2f6372340d46\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.867094 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhxq7\" (UniqueName: \"kubernetes.io/projected/887dabae-3c4e-46ad-81d5-2f6372340d46-kube-api-access-xhxq7\") pod \"nmstate-operator-694c9596b7-jfk5h\" (UID: 
\"887dabae-3c4e-46ad-81d5-2f6372340d46\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" Feb 24 09:20:15 crc kubenswrapper[4822]: I0224 09:20:15.922856 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" Feb 24 09:20:16 crc kubenswrapper[4822]: I0224 09:20:16.179422 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jfk5h"] Feb 24 09:20:16 crc kubenswrapper[4822]: W0224 09:20:16.188747 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod887dabae_3c4e_46ad_81d5_2f6372340d46.slice/crio-56e8046cce463033384940633209a901a98bb40212dcc4df059b8887509962b1 WatchSource:0}: Error finding container 56e8046cce463033384940633209a901a98bb40212dcc4df059b8887509962b1: Status 404 returned error can't find the container with id 56e8046cce463033384940633209a901a98bb40212dcc4df059b8887509962b1 Feb 24 09:20:16 crc kubenswrapper[4822]: I0224 09:20:16.379052 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" event={"ID":"887dabae-3c4e-46ad-81d5-2f6372340d46","Type":"ContainerStarted","Data":"56e8046cce463033384940633209a901a98bb40212dcc4df059b8887509962b1"} Feb 24 09:20:19 crc kubenswrapper[4822]: I0224 09:20:19.401505 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" event={"ID":"887dabae-3c4e-46ad-81d5-2f6372340d46","Type":"ContainerStarted","Data":"5a384e798aae3c4b32e07411abf1bf3ef40d0029b5e7f31b758dea263f28c6a5"} Feb 24 09:20:19 crc kubenswrapper[4822]: I0224 09:20:19.432871 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-jfk5h" podStartSLOduration=2.364806753 podStartE2EDuration="4.432841733s" podCreationTimestamp="2026-02-24 09:20:15 +0000 UTC" 
firstStartedPulling="2026-02-24 09:20:16.190312538 +0000 UTC m=+738.578075096" lastFinishedPulling="2026-02-24 09:20:18.258347518 +0000 UTC m=+740.646110076" observedRunningTime="2026-02-24 09:20:19.428009289 +0000 UTC m=+741.815771927" watchObservedRunningTime="2026-02-24 09:20:19.432841733 +0000 UTC m=+741.820604311" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.393729 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.394634 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.397461 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-s2zgr" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.401036 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns4dj\" (UniqueName: \"kubernetes.io/projected/8f7ac8ea-81e2-4ad5-8530-34012de52e42-kube-api-access-ns4dj\") pod \"nmstate-metrics-58c85c668d-m4jxs\" (UID: \"8f7ac8ea-81e2-4ad5-8530-34012de52e42\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.406749 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.412453 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.413541 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.451331 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.456075 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gmb7g"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.456867 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.479518 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.501869 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqx9\" (UniqueName: \"kubernetes.io/projected/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-kube-api-access-sxqx9\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.501943 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-dbus-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.501969 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zljmg\" (UniqueName: \"kubernetes.io/projected/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-kube-api-access-zljmg\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " 
pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.502003 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns4dj\" (UniqueName: \"kubernetes.io/projected/8f7ac8ea-81e2-4ad5-8530-34012de52e42-kube-api-access-ns4dj\") pod \"nmstate-metrics-58c85c668d-m4jxs\" (UID: \"8f7ac8ea-81e2-4ad5-8530-34012de52e42\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.502443 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-nmstate-lock\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.502518 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-ovs-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.502565 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.525465 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns4dj\" (UniqueName: \"kubernetes.io/projected/8f7ac8ea-81e2-4ad5-8530-34012de52e42-kube-api-access-ns4dj\") pod \"nmstate-metrics-58c85c668d-m4jxs\" (UID: 
\"8f7ac8ea-81e2-4ad5-8530-34012de52e42\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.543486 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.544451 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.550215 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hrzxc" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.550459 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.550667 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.555025 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604090 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxqx9\" (UniqueName: \"kubernetes.io/projected/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-kube-api-access-sxqx9\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604130 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5v4p\" (UniqueName: \"kubernetes.io/projected/586c31ab-56d5-49e0-9eba-f10ddce76632-kube-api-access-h5v4p\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604160 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-dbus-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604181 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zljmg\" (UniqueName: \"kubernetes.io/projected/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-kube-api-access-zljmg\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604211 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-nmstate-lock\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604229 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/586c31ab-56d5-49e0-9eba-f10ddce76632-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604253 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-ovs-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " 
pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604275 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604300 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: E0224 09:20:20.604416 4822 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 24 09:20:20 crc kubenswrapper[4822]: E0224 09:20:20.604468 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair podName:a3eb20c1-d31a-43d6-83d5-66344ed5fa7d nodeName:}" failed. No retries permitted until 2026-02-24 09:20:21.104450327 +0000 UTC m=+743.492212875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair") pod "nmstate-webhook-866bcb46dc-mhj46" (UID: "a3eb20c1-d31a-43d6-83d5-66344ed5fa7d") : secret "openshift-nmstate-webhook" not found Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604704 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-nmstate-lock\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.604848 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-ovs-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.605013 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-dbus-socket\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.620348 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxqx9\" (UniqueName: \"kubernetes.io/projected/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-kube-api-access-sxqx9\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.623196 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zljmg\" (UniqueName: 
\"kubernetes.io/projected/43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac-kube-api-access-zljmg\") pod \"nmstate-handler-gmb7g\" (UID: \"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac\") " pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.705609 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/586c31ab-56d5-49e0-9eba-f10ddce76632-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.705666 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.705724 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5v4p\" (UniqueName: \"kubernetes.io/projected/586c31ab-56d5-49e0-9eba-f10ddce76632-kube-api-access-h5v4p\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: E0224 09:20:20.705875 4822 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 24 09:20:20 crc kubenswrapper[4822]: E0224 09:20:20.705988 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert podName:586c31ab-56d5-49e0-9eba-f10ddce76632 nodeName:}" failed. 
No retries permitted until 2026-02-24 09:20:21.205969307 +0000 UTC m=+743.593731845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-mr279" (UID: "586c31ab-56d5-49e0-9eba-f10ddce76632") : secret "plugin-serving-cert" not found Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.706634 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/586c31ab-56d5-49e0-9eba-f10ddce76632-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.724897 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65d4dc5f6b-nsffs"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.725552 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.732860 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5v4p\" (UniqueName: \"kubernetes.io/projected/586c31ab-56d5-49e0-9eba-f10ddce76632-kube-api-access-h5v4p\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.736173 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65d4dc5f6b-nsffs"] Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.761260 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.775207 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808059 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808118 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvts\" (UniqueName: \"kubernetes.io/projected/86614a4f-6c67-4c12-956e-ac12a47ebddb-kube-api-access-lrvts\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808144 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-oauth-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808167 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-service-ca\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808190 4822 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808208 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-oauth-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:20 crc kubenswrapper[4822]: I0224 09:20:20.808225 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-trusted-ca-bundle\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909028 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909415 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvts\" (UniqueName: \"kubernetes.io/projected/86614a4f-6c67-4c12-956e-ac12a47ebddb-kube-api-access-lrvts\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc 
kubenswrapper[4822]: I0224 09:20:20.909457 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-oauth-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909495 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-service-ca\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909531 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909557 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-oauth-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.909582 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-trusted-ca-bundle\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 
09:20:20.910526 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-oauth-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.910859 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-trusted-ca-bundle\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.913196 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.913748 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-serving-cert\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.915375 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86614a4f-6c67-4c12-956e-ac12a47ebddb-service-ca\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.916148 4822 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86614a4f-6c67-4c12-956e-ac12a47ebddb-console-oauth-config\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.930000 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvts\" (UniqueName: \"kubernetes.io/projected/86614a4f-6c67-4c12-956e-ac12a47ebddb-kube-api-access-lrvts\") pod \"console-65d4dc5f6b-nsffs\" (UID: \"86614a4f-6c67-4c12-956e-ac12a47ebddb\") " pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:20.952671 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs"] Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.068293 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.111604 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.115489 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3eb20c1-d31a-43d6-83d5-66344ed5fa7d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mhj46\" (UID: \"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.213388 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.216475 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/586c31ab-56d5-49e0-9eba-f10ddce76632-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mr279\" (UID: \"586c31ab-56d5-49e0-9eba-f10ddce76632\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.369398 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.413109 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gmb7g" event={"ID":"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac","Type":"ContainerStarted","Data":"169ae5ff83db77b17780cb525111f922234f89e135aced9e91d6bcde9349eeb5"} Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.414106 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" event={"ID":"8f7ac8ea-81e2-4ad5-8530-34012de52e42","Type":"ContainerStarted","Data":"4a210ad7e9dd650d0acd7782bda6b2ea96b17f2bbcf2a666f90b25eb788b53d4"} Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.464709 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.906321 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46"] Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.910191 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65d4dc5f6b-nsffs"] Feb 24 09:20:21 crc kubenswrapper[4822]: I0224 09:20:21.977007 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279"] Feb 24 09:20:22 crc kubenswrapper[4822]: I0224 09:20:22.422260 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65d4dc5f6b-nsffs" event={"ID":"86614a4f-6c67-4c12-956e-ac12a47ebddb","Type":"ContainerStarted","Data":"a06cbb4b1c99eb8b442d1d0a441d403b1d8439ebacfdcd578a5c6a107cb6fc63"} Feb 24 09:20:22 crc kubenswrapper[4822]: I0224 09:20:22.422592 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65d4dc5f6b-nsffs" event={"ID":"86614a4f-6c67-4c12-956e-ac12a47ebddb","Type":"ContainerStarted","Data":"714bf8b36dd36ee569ffd050e9f05be25e5699de617275d96f8bf8841872d488"} Feb 24 09:20:22 crc kubenswrapper[4822]: I0224 09:20:22.423679 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" event={"ID":"586c31ab-56d5-49e0-9eba-f10ddce76632","Type":"ContainerStarted","Data":"7c67239a640be0dd2069aecf53574a00ad122c9a87fd4b0b70a054eac028c7d2"} Feb 24 09:20:22 crc kubenswrapper[4822]: I0224 09:20:22.427099 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" event={"ID":"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d","Type":"ContainerStarted","Data":"2b4a4691996e4127592d1815e438e6237ffb126655ae258cc08a2ebc108367d1"} Feb 24 09:20:22 crc kubenswrapper[4822]: I0224 09:20:22.441393 4822 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65d4dc5f6b-nsffs" podStartSLOduration=2.44137752 podStartE2EDuration="2.44137752s" podCreationTimestamp="2026-02-24 09:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:20:22.438396056 +0000 UTC m=+744.826158624" watchObservedRunningTime="2026-02-24 09:20:22.44137752 +0000 UTC m=+744.829140098" Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.436192 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" event={"ID":"a3eb20c1-d31a-43d6-83d5-66344ed5fa7d","Type":"ContainerStarted","Data":"f825367389652e4c97f8226b05fdcf8be644c54877dd80323a180917c8f2ad00"} Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.436563 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.437682 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gmb7g" event={"ID":"43b0a80e-a4ad-4f13-9989-f3c3fb23d6ac","Type":"ContainerStarted","Data":"3a76549fcba2f698bdcc0c7274effa0552cfa2e9310e1dfcb38d0061da25903f"} Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.438216 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.440208 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" event={"ID":"8f7ac8ea-81e2-4ad5-8530-34012de52e42","Type":"ContainerStarted","Data":"6025ccc7407ff806b3954d39b9f413d18cbf79b5eb04e5cded95a12343620b15"} Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.457080 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" podStartSLOduration=2.266481878 podStartE2EDuration="3.457064717s" podCreationTimestamp="2026-02-24 09:20:20 +0000 UTC" firstStartedPulling="2026-02-24 09:20:21.924986234 +0000 UTC m=+744.312748782" lastFinishedPulling="2026-02-24 09:20:23.115569073 +0000 UTC m=+745.503331621" observedRunningTime="2026-02-24 09:20:23.451740189 +0000 UTC m=+745.839502747" watchObservedRunningTime="2026-02-24 09:20:23.457064717 +0000 UTC m=+745.844827265" Feb 24 09:20:23 crc kubenswrapper[4822]: I0224 09:20:23.474151 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gmb7g" podStartSLOduration=1.157528917 podStartE2EDuration="3.474134309s" podCreationTimestamp="2026-02-24 09:20:20 +0000 UTC" firstStartedPulling="2026-02-24 09:20:20.800021061 +0000 UTC m=+743.187783609" lastFinishedPulling="2026-02-24 09:20:23.116626423 +0000 UTC m=+745.504389001" observedRunningTime="2026-02-24 09:20:23.472828634 +0000 UTC m=+745.860591192" watchObservedRunningTime="2026-02-24 09:20:23.474134309 +0000 UTC m=+745.861896857" Feb 24 09:20:24 crc kubenswrapper[4822]: I0224 09:20:24.447499 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" event={"ID":"586c31ab-56d5-49e0-9eba-f10ddce76632","Type":"ContainerStarted","Data":"ecfe37d1f15939805252bc4c31789e863bf49a5e0135baf2048f3aafc1bc3d5f"} Feb 24 09:20:24 crc kubenswrapper[4822]: I0224 09:20:24.466645 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mr279" podStartSLOduration=2.466953078 podStartE2EDuration="4.466626956s" podCreationTimestamp="2026-02-24 09:20:20 +0000 UTC" firstStartedPulling="2026-02-24 09:20:21.987325569 +0000 UTC m=+744.375088117" lastFinishedPulling="2026-02-24 09:20:23.986999447 +0000 UTC m=+746.374761995" observedRunningTime="2026-02-24 09:20:24.461596076 +0000 UTC 
m=+746.849358634" watchObservedRunningTime="2026-02-24 09:20:24.466626956 +0000 UTC m=+746.854389514" Feb 24 09:20:25 crc kubenswrapper[4822]: I0224 09:20:25.457740 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" event={"ID":"8f7ac8ea-81e2-4ad5-8530-34012de52e42","Type":"ContainerStarted","Data":"dd0453e0500391ee90f86b45f9cec5a714418ea6eea1cd0022ea35cc7a367be3"} Feb 24 09:20:25 crc kubenswrapper[4822]: I0224 09:20:25.479524 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m4jxs" podStartSLOduration=1.413316378 podStartE2EDuration="5.479497725s" podCreationTimestamp="2026-02-24 09:20:20 +0000 UTC" firstStartedPulling="2026-02-24 09:20:20.958184149 +0000 UTC m=+743.345946717" lastFinishedPulling="2026-02-24 09:20:25.024365516 +0000 UTC m=+747.412128064" observedRunningTime="2026-02-24 09:20:25.479402992 +0000 UTC m=+747.867165580" watchObservedRunningTime="2026-02-24 09:20:25.479497725 +0000 UTC m=+747.867260313" Feb 24 09:20:30 crc kubenswrapper[4822]: I0224 09:20:30.807418 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gmb7g" Feb 24 09:20:31 crc kubenswrapper[4822]: I0224 09:20:31.068810 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:31 crc kubenswrapper[4822]: I0224 09:20:31.069001 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:31 crc kubenswrapper[4822]: I0224 09:20:31.077185 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 09:20:31 crc kubenswrapper[4822]: I0224 09:20:31.516420 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65d4dc5f6b-nsffs" Feb 24 
09:20:31 crc kubenswrapper[4822]: I0224 09:20:31.603076 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:20:41 crc kubenswrapper[4822]: I0224 09:20:41.380358 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mhj46" Feb 24 09:20:45 crc kubenswrapper[4822]: I0224 09:20:45.676500 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:20:45 crc kubenswrapper[4822]: I0224 09:20:45.676979 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.849986 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk"] Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.852340 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.856082 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.857662 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk"] Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.921508 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.921583 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:55 crc kubenswrapper[4822]: I0224 09:20:55.921667 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cth9\" (UniqueName: \"kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: 
I0224 09:20:56.022606 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.022682 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.022739 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cth9\" (UniqueName: \"kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.023570 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.023865 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.054081 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cth9\" (UniqueName: \"kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.178167 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.482357 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk"] Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.653429 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6shfw" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerName="console" containerID="cri-o://7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465" gracePeriod=15 Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.695383 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerStarted","Data":"4d246a0b83927f881f4d4631c91d1bd29b92ee5c9ea12ec1ec813345f4a9492c"} Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.695448 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerStarted","Data":"3fc1102aad93e60f186646f1512a0c0e1ffd59a094b9676c5599cceb44829c19"} Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.982203 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6shfw_996cd1b1-5fb9-448d-bcb6-933ee6c0a31a/console/0.log" Feb 24 09:20:56 crc kubenswrapper[4822]: I0224 09:20:56.982278 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.035218 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.035305 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.035340 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.035393 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2sq8\" (UniqueName: 
\"kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036211 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca" (OuterVolumeSpecName: "service-ca") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036228 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036246 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036360 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036438 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert\") pod \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\" (UID: \"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a\") " Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.036934 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.037034 4822 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.037079 4822 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.037099 4822 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.037206 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config" (OuterVolumeSpecName: "console-config") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.040870 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8" (OuterVolumeSpecName: "kube-api-access-w2sq8") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "kube-api-access-w2sq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.040895 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.041196 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" (UID: "996cd1b1-5fb9-448d-bcb6-933ee6c0a31a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.138250 4822 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.138311 4822 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.138333 4822 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.138351 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2sq8\" (UniqueName: 
\"kubernetes.io/projected/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a-kube-api-access-w2sq8\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704593 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6shfw_996cd1b1-5fb9-448d-bcb6-933ee6c0a31a/console/0.log" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704675 4822 generic.go:334] "Generic (PLEG): container finished" podID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerID="7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465" exitCode=2 Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704772 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6shfw" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704787 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6shfw" event={"ID":"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a","Type":"ContainerDied","Data":"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465"} Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704871 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6shfw" event={"ID":"996cd1b1-5fb9-448d-bcb6-933ee6c0a31a","Type":"ContainerDied","Data":"45db4a43690ba9ef6cb516fb66bb9a4d05f3c02fdca16285c4e6a356d688dedb"} Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.704942 4822 scope.go:117] "RemoveContainer" containerID="7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.708627 4822 generic.go:334] "Generic (PLEG): container finished" podID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerID="4d246a0b83927f881f4d4631c91d1bd29b92ee5c9ea12ec1ec813345f4a9492c" exitCode=0 Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.708667 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerDied","Data":"4d246a0b83927f881f4d4631c91d1bd29b92ee5c9ea12ec1ec813345f4a9492c"} Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.741640 4822 scope.go:117] "RemoveContainer" containerID="7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465" Feb 24 09:20:57 crc kubenswrapper[4822]: E0224 09:20:57.743473 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465\": container with ID starting with 7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465 not found: ID does not exist" containerID="7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.743530 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465"} err="failed to get container status \"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465\": rpc error: code = NotFound desc = could not find container \"7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465\": container with ID starting with 7f9b8dbc94185b2f1aedddcdfcc2580d58b3677c8e6c973bfae4d8d29e11c465 not found: ID does not exist" Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.765540 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:20:57 crc kubenswrapper[4822]: I0224 09:20:57.773471 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6shfw"] Feb 24 09:20:58 crc kubenswrapper[4822]: I0224 09:20:58.364078 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" 
path="/var/lib/kubelet/pods/996cd1b1-5fb9-448d-bcb6-933ee6c0a31a/volumes" Feb 24 09:20:59 crc kubenswrapper[4822]: I0224 09:20:59.729726 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerDied","Data":"fba989f8f3c2f6157389f3b27021587249e718b2d93b01a78dee8a36716c8c2e"} Feb 24 09:20:59 crc kubenswrapper[4822]: I0224 09:20:59.729701 4822 generic.go:334] "Generic (PLEG): container finished" podID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerID="fba989f8f3c2f6157389f3b27021587249e718b2d93b01a78dee8a36716c8c2e" exitCode=0 Feb 24 09:21:00 crc kubenswrapper[4822]: I0224 09:21:00.740665 4822 generic.go:334] "Generic (PLEG): container finished" podID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerID="4f003afc0b5b54a00938852bc5b66b012b2c5f1eeeaec6d7492972693c9baf84" exitCode=0 Feb 24 09:21:00 crc kubenswrapper[4822]: I0224 09:21:00.740731 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerDied","Data":"4f003afc0b5b54a00938852bc5b66b012b2c5f1eeeaec6d7492972693c9baf84"} Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.106372 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.214062 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util\") pod \"924f4be2-107e-4d42-ab38-bed4a5d96d18\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.214286 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cth9\" (UniqueName: \"kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9\") pod \"924f4be2-107e-4d42-ab38-bed4a5d96d18\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.214387 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle\") pod \"924f4be2-107e-4d42-ab38-bed4a5d96d18\" (UID: \"924f4be2-107e-4d42-ab38-bed4a5d96d18\") " Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.216075 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle" (OuterVolumeSpecName: "bundle") pod "924f4be2-107e-4d42-ab38-bed4a5d96d18" (UID: "924f4be2-107e-4d42-ab38-bed4a5d96d18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.224176 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9" (OuterVolumeSpecName: "kube-api-access-4cth9") pod "924f4be2-107e-4d42-ab38-bed4a5d96d18" (UID: "924f4be2-107e-4d42-ab38-bed4a5d96d18"). InnerVolumeSpecName "kube-api-access-4cth9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.238396 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util" (OuterVolumeSpecName: "util") pod "924f4be2-107e-4d42-ab38-bed4a5d96d18" (UID: "924f4be2-107e-4d42-ab38-bed4a5d96d18"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.315905 4822 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.316001 4822 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/924f4be2-107e-4d42-ab38-bed4a5d96d18-util\") on node \"crc\" DevicePath \"\"" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.316029 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cth9\" (UniqueName: \"kubernetes.io/projected/924f4be2-107e-4d42-ab38-bed4a5d96d18-kube-api-access-4cth9\") on node \"crc\" DevicePath \"\"" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.759771 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" event={"ID":"924f4be2-107e-4d42-ab38-bed4a5d96d18","Type":"ContainerDied","Data":"3fc1102aad93e60f186646f1512a0c0e1ffd59a094b9676c5599cceb44829c19"} Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.759839 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fc1102aad93e60f186646f1512a0c0e1ffd59a094b9676c5599cceb44829c19" Feb 24 09:21:02 crc kubenswrapper[4822]: I0224 09:21:02.759885 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jgttk" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.923253 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h"] Feb 24 09:21:10 crc kubenswrapper[4822]: E0224 09:21:10.923977 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerName="console" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.923993 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerName="console" Feb 24 09:21:10 crc kubenswrapper[4822]: E0224 09:21:10.924021 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="pull" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924031 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="pull" Feb 24 09:21:10 crc kubenswrapper[4822]: E0224 09:21:10.924048 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="extract" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924056 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="extract" Feb 24 09:21:10 crc kubenswrapper[4822]: E0224 09:21:10.924064 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="util" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924073 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" containerName="util" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924189 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="924f4be2-107e-4d42-ab38-bed4a5d96d18" 
containerName="extract" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924213 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="996cd1b1-5fb9-448d-bcb6-933ee6c0a31a" containerName="console" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.924633 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.926605 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.926693 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.926840 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.927288 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zx5f9" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.927779 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 24 09:21:10 crc kubenswrapper[4822]: I0224 09:21:10.953589 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h"] Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.029354 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-apiservice-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 
09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.029434 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58sv4\" (UniqueName: \"kubernetes.io/projected/e8c58f51-91b4-4a36-8795-85238805d159-kube-api-access-58sv4\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.029493 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-webhook-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.130706 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58sv4\" (UniqueName: \"kubernetes.io/projected/e8c58f51-91b4-4a36-8795-85238805d159-kube-api-access-58sv4\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.130788 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-webhook-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.130836 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-apiservice-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.137270 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-apiservice-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.151607 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58sv4\" (UniqueName: \"kubernetes.io/projected/e8c58f51-91b4-4a36-8795-85238805d159-kube-api-access-58sv4\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.152026 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8c58f51-91b4-4a36-8795-85238805d159-webhook-cert\") pod \"metallb-operator-controller-manager-5cff6cb7c-2ww5h\" (UID: \"e8c58f51-91b4-4a36-8795-85238805d159\") " pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.240022 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.290193 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p"] Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.290941 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.292742 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fkpb8" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.293003 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.301130 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.305038 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p"] Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.333096 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfgvt\" (UniqueName: \"kubernetes.io/projected/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-kube-api-access-kfgvt\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.333189 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-webhook-cert\") pod 
\"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.333224 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.435285 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-webhook-cert\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.435567 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.435613 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfgvt\" (UniqueName: \"kubernetes.io/projected/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-kube-api-access-kfgvt\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.442695 
4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-webhook-cert\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.443592 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.453555 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfgvt\" (UniqueName: \"kubernetes.io/projected/18ff8b6d-cea0-4eaa-933d-7c30b873d37c-kube-api-access-kfgvt\") pod \"metallb-operator-webhook-server-6c678965ff-mws2p\" (UID: \"18ff8b6d-cea0-4eaa-933d-7c30b873d37c\") " pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.496812 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h"] Feb 24 09:21:11 crc kubenswrapper[4822]: W0224 09:21:11.502021 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c58f51_91b4_4a36_8795_85238805d159.slice/crio-91cbb1e7c2a3c06680ed28547e1bbfe84eac71dc87d6b1021ff318ebdef7cc6b WatchSource:0}: Error finding container 91cbb1e7c2a3c06680ed28547e1bbfe84eac71dc87d6b1021ff318ebdef7cc6b: Status 404 returned error can't find the container with id 91cbb1e7c2a3c06680ed28547e1bbfe84eac71dc87d6b1021ff318ebdef7cc6b Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 
09:21:11.610866 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.811331 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" event={"ID":"e8c58f51-91b4-4a36-8795-85238805d159","Type":"ContainerStarted","Data":"91cbb1e7c2a3c06680ed28547e1bbfe84eac71dc87d6b1021ff318ebdef7cc6b"} Feb 24 09:21:11 crc kubenswrapper[4822]: I0224 09:21:11.904360 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p"] Feb 24 09:21:11 crc kubenswrapper[4822]: W0224 09:21:11.908243 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ff8b6d_cea0_4eaa_933d_7c30b873d37c.slice/crio-95b60f63032c4c3f68adc17b89b96f96764ce0835baf62ec8ce3e5cb634e038c WatchSource:0}: Error finding container 95b60f63032c4c3f68adc17b89b96f96764ce0835baf62ec8ce3e5cb634e038c: Status 404 returned error can't find the container with id 95b60f63032c4c3f68adc17b89b96f96764ce0835baf62ec8ce3e5cb634e038c Feb 24 09:21:12 crc kubenswrapper[4822]: I0224 09:21:12.817826 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" event={"ID":"18ff8b6d-cea0-4eaa-933d-7c30b873d37c","Type":"ContainerStarted","Data":"95b60f63032c4c3f68adc17b89b96f96764ce0835baf62ec8ce3e5cb634e038c"} Feb 24 09:21:14 crc kubenswrapper[4822]: I0224 09:21:14.842375 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" event={"ID":"e8c58f51-91b4-4a36-8795-85238805d159","Type":"ContainerStarted","Data":"35330b6a7a25b0c923e9da091601292ecd3eb42ff23ac2855bb6888ca9ffdf76"} Feb 24 09:21:14 crc kubenswrapper[4822]: I0224 09:21:14.843345 4822 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" Feb 24 09:21:14 crc kubenswrapper[4822]: I0224 09:21:14.869456 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h" podStartSLOduration=2.455172414 podStartE2EDuration="4.869438263s" podCreationTimestamp="2026-02-24 09:21:10 +0000 UTC" firstStartedPulling="2026-02-24 09:21:11.503937939 +0000 UTC m=+793.891700487" lastFinishedPulling="2026-02-24 09:21:13.918203778 +0000 UTC m=+796.305966336" observedRunningTime="2026-02-24 09:21:14.858635977 +0000 UTC m=+797.246398525" watchObservedRunningTime="2026-02-24 09:21:14.869438263 +0000 UTC m=+797.257200811" Feb 24 09:21:15 crc kubenswrapper[4822]: I0224 09:21:15.676590 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:21:15 crc kubenswrapper[4822]: I0224 09:21:15.676661 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:21:16 crc kubenswrapper[4822]: I0224 09:21:16.867648 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" event={"ID":"18ff8b6d-cea0-4eaa-933d-7c30b873d37c","Type":"ContainerStarted","Data":"37d194168ffc665da249c6c1c74fc1af799234a85846dfd8cf4c79d0d0d125ee"} Feb 24 09:21:16 crc kubenswrapper[4822]: I0224 09:21:16.884533 4822 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" podStartSLOduration=1.9946198019999999 podStartE2EDuration="5.88451916s" podCreationTimestamp="2026-02-24 09:21:11 +0000 UTC" firstStartedPulling="2026-02-24 09:21:11.91106963 +0000 UTC m=+794.298832188" lastFinishedPulling="2026-02-24 09:21:15.800968968 +0000 UTC m=+798.188731546" observedRunningTime="2026-02-24 09:21:16.884388916 +0000 UTC m=+799.272151464" watchObservedRunningTime="2026-02-24 09:21:16.88451916 +0000 UTC m=+799.272281698" Feb 24 09:21:17 crc kubenswrapper[4822]: I0224 09:21:17.881064 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:31 crc kubenswrapper[4822]: I0224 09:21:31.616556 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c678965ff-mws2p" Feb 24 09:21:36 crc kubenswrapper[4822]: I0224 09:21:36.562808 4822 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 09:21:45 crc kubenswrapper[4822]: I0224 09:21:45.676908 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:21:45 crc kubenswrapper[4822]: I0224 09:21:45.679033 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:21:45 crc kubenswrapper[4822]: I0224 09:21:45.679338 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:21:45 crc kubenswrapper[4822]: I0224 09:21:45.680411 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:21:45 crc kubenswrapper[4822]: I0224 09:21:45.680694 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf" gracePeriod=600 Feb 24 09:21:46 crc kubenswrapper[4822]: I0224 09:21:46.065031 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf" exitCode=0 Feb 24 09:21:46 crc kubenswrapper[4822]: I0224 09:21:46.065103 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf"} Feb 24 09:21:46 crc kubenswrapper[4822]: I0224 09:21:46.065316 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c"} Feb 24 09:21:46 crc kubenswrapper[4822]: I0224 09:21:46.065337 4822 scope.go:117] "RemoveContainer" 
containerID="6558fef3da8f529e43059f424d69ba37b66c4e03bf96f9d22590f42fc65711b2"
Feb 24 09:21:51 crc kubenswrapper[4822]: I0224 09:21:51.244372 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cff6cb7c-2ww5h"
Feb 24 09:21:51 crc kubenswrapper[4822]: I0224 09:21:51.996346 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k5l8z"]
Feb 24 09:21:51 crc kubenswrapper[4822]: I0224 09:21:51.999819 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.002110 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.002283 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"]
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.002504 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.003046 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qndw6"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.003616 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.005865 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.015981 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"]
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.047907 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-reloader\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.047999 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad5c2011-3f6e-442c-8aff-341533371121-metrics-certs\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048058 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048083 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxqg\" (UniqueName: \"kubernetes.io/projected/ad5c2011-3f6e-442c-8aff-341533371121-kube-api-access-bcxqg\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048113 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-conf\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048152 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-sockets\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048170 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ad5c2011-3f6e-442c-8aff-341533371121-frr-startup\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048293 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-metrics\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.048404 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpg7\" (UniqueName: \"kubernetes.io/projected/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-kube-api-access-tkpg7\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.076901 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6d67q"]
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.077716 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.081814 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c7vn9"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.082796 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.083076 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.083130 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.085831 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-v4s72"]
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.087510 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.088902 4822 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.094502 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-v4s72"]
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149507 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpg7\" (UniqueName: \"kubernetes.io/projected/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-kube-api-access-tkpg7\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149567 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-reloader\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149604 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-metrics-certs\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149631 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad5c2011-3f6e-442c-8aff-341533371121-metrics-certs\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149669 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metallb-excludel2\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149693 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149716 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxqg\" (UniqueName: \"kubernetes.io/projected/ad5c2011-3f6e-442c-8aff-341533371121-kube-api-access-bcxqg\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149746 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149770 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-conf\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149802 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ctz\" (UniqueName: \"kubernetes.io/projected/df430a37-d7e1-4321-8643-98672e65f889-kube-api-access-c8ctz\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149833 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-sockets\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149852 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ad5c2011-3f6e-442c-8aff-341533371121-frr-startup\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149885 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8z6q\" (UniqueName: \"kubernetes.io/projected/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-kube-api-access-c8z6q\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149925 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metrics-certs\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149949 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-cert\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.149985 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-metrics\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.150174 4822 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.150249 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert podName:7a7eca30-d23a-42ca-b0a4-7da1bf69f790 nodeName:}" failed. No retries permitted until 2026-02-24 09:21:52.650229258 +0000 UTC m=+835.037991806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert") pod "frr-k8s-webhook-server-78b44bf5bb-h9b25" (UID: "7a7eca30-d23a-42ca-b0a4-7da1bf69f790") : secret "frr-k8s-webhook-server-cert" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.150781 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-conf\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.150803 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-reloader\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.150994 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-metrics\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.151199 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ad5c2011-3f6e-442c-8aff-341533371121-frr-sockets\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.151300 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ad5c2011-3f6e-442c-8aff-341533371121-frr-startup\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.166032 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxqg\" (UniqueName: \"kubernetes.io/projected/ad5c2011-3f6e-442c-8aff-341533371121-kube-api-access-bcxqg\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.168320 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpg7\" (UniqueName: \"kubernetes.io/projected/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-kube-api-access-tkpg7\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.169658 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad5c2011-3f6e-442c-8aff-341533371121-metrics-certs\") pod \"frr-k8s-k5l8z\" (UID: \"ad5c2011-3f6e-442c-8aff-341533371121\") " pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251303 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-metrics-certs\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251399 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metallb-excludel2\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251444 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251467 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8ctz\" (UniqueName: \"kubernetes.io/projected/df430a37-d7e1-4321-8643-98672e65f889-kube-api-access-c8ctz\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251492 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8z6q\" (UniqueName: \"kubernetes.io/projected/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-kube-api-access-c8z6q\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251506 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metrics-certs\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.251520 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-cert\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.251670 4822 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.251758 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist podName:fd811514-2b98-4ec6-b9d7-262bf3c63c9e nodeName:}" failed. No retries permitted until 2026-02-24 09:21:52.75173175 +0000 UTC m=+835.139494318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist") pod "speaker-6d67q" (UID: "fd811514-2b98-4ec6-b9d7-262bf3c63c9e") : secret "metallb-memberlist" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.252192 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metallb-excludel2\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.255041 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-metrics-certs\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.255204 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-metrics-certs\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.263175 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df430a37-d7e1-4321-8643-98672e65f889-cert\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.270725 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8ctz\" (UniqueName: \"kubernetes.io/projected/df430a37-d7e1-4321-8643-98672e65f889-kube-api-access-c8ctz\") pod \"controller-69bbfbf88f-v4s72\" (UID: \"df430a37-d7e1-4321-8643-98672e65f889\") " pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.272844 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8z6q\" (UniqueName: \"kubernetes.io/projected/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-kube-api-access-c8z6q\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.323796 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.400554 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.656670 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.662955 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7eca30-d23a-42ca-b0a4-7da1bf69f790-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9b25\" (UID: \"7a7eca30-d23a-42ca-b0a4-7da1bf69f790\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.758327 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.758594 4822 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: E0224 09:21:52.758726 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist podName:fd811514-2b98-4ec6-b9d7-262bf3c63c9e nodeName:}" failed. No retries permitted until 2026-02-24 09:21:53.758691976 +0000 UTC m=+836.146454564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist") pod "speaker-6d67q" (UID: "fd811514-2b98-4ec6-b9d7-262bf3c63c9e") : secret "metallb-memberlist" not found
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.833090 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-v4s72"]
Feb 24 09:21:52 crc kubenswrapper[4822]: W0224 09:21:52.841870 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf430a37_d7e1_4321_8643_98672e65f889.slice/crio-decc150475a3a11912ea2444f9e9e96327ef768551c98f02f10a2a04cd010351 WatchSource:0}: Error finding container decc150475a3a11912ea2444f9e9e96327ef768551c98f02f10a2a04cd010351: Status 404 returned error can't find the container with id decc150475a3a11912ea2444f9e9e96327ef768551c98f02f10a2a04cd010351
Feb 24 09:21:52 crc kubenswrapper[4822]: I0224 09:21:52.943530 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.135406 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v4s72" event={"ID":"df430a37-d7e1-4321-8643-98672e65f889","Type":"ContainerStarted","Data":"b65a80946ff8b81cd55a9332a4399c02b81984a950609ea2f79d9d0e371d071f"}
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.135600 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v4s72" event={"ID":"df430a37-d7e1-4321-8643-98672e65f889","Type":"ContainerStarted","Data":"decc150475a3a11912ea2444f9e9e96327ef768551c98f02f10a2a04cd010351"}
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.137471 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"cd4f5d16f468891300a4af50ee087bd95131517856d517d2e151f659f1950f13"}
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.469278 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"]
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.773794 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.782933 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fd811514-2b98-4ec6-b9d7-262bf3c63c9e-memberlist\") pod \"speaker-6d67q\" (UID: \"fd811514-2b98-4ec6-b9d7-262bf3c63c9e\") " pod="metallb-system/speaker-6d67q"
Feb 24 09:21:53 crc kubenswrapper[4822]: I0224 09:21:53.891265 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6d67q"
Feb 24 09:21:53 crc kubenswrapper[4822]: W0224 09:21:53.917443 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd811514_2b98_4ec6_b9d7_262bf3c63c9e.slice/crio-0b2cbc13fcb76bb626b24a1baef610127e54ef326a9eecece1cdc8332e7bc0d1 WatchSource:0}: Error finding container 0b2cbc13fcb76bb626b24a1baef610127e54ef326a9eecece1cdc8332e7bc0d1: Status 404 returned error can't find the container with id 0b2cbc13fcb76bb626b24a1baef610127e54ef326a9eecece1cdc8332e7bc0d1
Feb 24 09:21:54 crc kubenswrapper[4822]: I0224 09:21:54.143778 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-v4s72" event={"ID":"df430a37-d7e1-4321-8643-98672e65f889","Type":"ContainerStarted","Data":"d4111e22e85fdf6dd1859c642851211bc7de2113807eb0ec6a249bcbd2ea6894"}
Feb 24 09:21:54 crc kubenswrapper[4822]: I0224 09:21:54.145056 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:21:54 crc kubenswrapper[4822]: I0224 09:21:54.149596 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25" event={"ID":"7a7eca30-d23a-42ca-b0a4-7da1bf69f790","Type":"ContainerStarted","Data":"d7901db7c139f17fe274b40fa1bff669e229f885d884d5e0c05f363af5504e7b"}
Feb 24 09:21:54 crc kubenswrapper[4822]: I0224 09:21:54.154145 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6d67q" event={"ID":"fd811514-2b98-4ec6-b9d7-262bf3c63c9e","Type":"ContainerStarted","Data":"0b2cbc13fcb76bb626b24a1baef610127e54ef326a9eecece1cdc8332e7bc0d1"}
Feb 24 09:21:55 crc kubenswrapper[4822]: I0224 09:21:55.168268 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6d67q" event={"ID":"fd811514-2b98-4ec6-b9d7-262bf3c63c9e","Type":"ContainerStarted","Data":"5ee46057df3621b85e1a68ad0d6d28e3cad25fad85e58ab9f96c26c45b5e52a6"}
Feb 24 09:21:55 crc kubenswrapper[4822]: I0224 09:21:55.168544 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6d67q" event={"ID":"fd811514-2b98-4ec6-b9d7-262bf3c63c9e","Type":"ContainerStarted","Data":"30b94a3fca6b1022d2aca24bbecebf523e14ae752707e86765f0f2a703a74690"}
Feb 24 09:21:55 crc kubenswrapper[4822]: I0224 09:21:55.168573 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6d67q"
Feb 24 09:21:55 crc kubenswrapper[4822]: I0224 09:21:55.182312 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6d67q" podStartSLOduration=3.182293411 podStartE2EDuration="3.182293411s" podCreationTimestamp="2026-02-24 09:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:21:55.180377938 +0000 UTC m=+837.568140486" watchObservedRunningTime="2026-02-24 09:21:55.182293411 +0000 UTC m=+837.570055959"
Feb 24 09:21:55 crc kubenswrapper[4822]: I0224 09:21:55.183003 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-v4s72" podStartSLOduration=3.18299968 podStartE2EDuration="3.18299968s" podCreationTimestamp="2026-02-24 09:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:21:54.169994633 +0000 UTC m=+836.557757181" watchObservedRunningTime="2026-02-24 09:21:55.18299968 +0000 UTC m=+837.570762228"
Feb 24 09:21:59 crc kubenswrapper[4822]: I0224 09:21:59.201836 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25" event={"ID":"7a7eca30-d23a-42ca-b0a4-7da1bf69f790","Type":"ContainerStarted","Data":"23603c1513f17e90656ad625fff1d737219915041db90d8822ef7c65ea05bf62"}
Feb 24 09:21:59 crc kubenswrapper[4822]: I0224 09:21:59.203776 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:21:59 crc kubenswrapper[4822]: I0224 09:21:59.203652 4822 generic.go:334] "Generic (PLEG): container finished" podID="ad5c2011-3f6e-442c-8aff-341533371121" containerID="4e173efdb862e37b1cc9cd8f1b55066c4b24564d73c49b316e66110ec6ba625a" exitCode=0
Feb 24 09:21:59 crc kubenswrapper[4822]: I0224 09:21:59.203811 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerDied","Data":"4e173efdb862e37b1cc9cd8f1b55066c4b24564d73c49b316e66110ec6ba625a"}
Feb 24 09:21:59 crc kubenswrapper[4822]: I0224 09:21:59.235193 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25" podStartSLOduration=2.7832038690000003 podStartE2EDuration="8.235168415s" podCreationTimestamp="2026-02-24 09:21:51 +0000 UTC" firstStartedPulling="2026-02-24 09:21:53.472288488 +0000 UTC m=+835.860051036" lastFinishedPulling="2026-02-24 09:21:58.924253044 +0000 UTC m=+841.312015582" observedRunningTime="2026-02-24 09:21:59.226719576 +0000 UTC m=+841.614482154" watchObservedRunningTime="2026-02-24 09:21:59.235168415 +0000 UTC m=+841.622931003"
Feb 24 09:21:59 crc kubenswrapper[4822]: E0224 09:21:59.437499 4822 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad5c2011_3f6e_442c_8aff_341533371121.slice/crio-0d6c9ae9d1de0c56285476cb911bea91f48a9946811de997ac44f0456180577e.scope\": RecentStats: unable to find data in memory cache]"
Feb 24 09:22:00 crc kubenswrapper[4822]: I0224 09:22:00.216388 4822 generic.go:334] "Generic (PLEG): container finished" podID="ad5c2011-3f6e-442c-8aff-341533371121" containerID="0d6c9ae9d1de0c56285476cb911bea91f48a9946811de997ac44f0456180577e" exitCode=0
Feb 24 09:22:00 crc kubenswrapper[4822]: I0224 09:22:00.216493 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerDied","Data":"0d6c9ae9d1de0c56285476cb911bea91f48a9946811de997ac44f0456180577e"}
Feb 24 09:22:01 crc kubenswrapper[4822]: I0224 09:22:01.227857 4822 generic.go:334] "Generic (PLEG): container finished" podID="ad5c2011-3f6e-442c-8aff-341533371121" containerID="7388a712c40395a7453c00d8c316dc326be1964109e7c9d17875bef7f18198e7" exitCode=0
Feb 24 09:22:01 crc kubenswrapper[4822]: I0224 09:22:01.227943 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerDied","Data":"7388a712c40395a7453c00d8c316dc326be1964109e7c9d17875bef7f18198e7"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.243218 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"8ea74db592bf304d7e64b9c8aa5ff7fb5625be022883e201e5d786386442c033"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.243574 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"038e26a2583bd0e62e9b07c14ac454306e29072b1a2184aa2e9ddda6080a4eb1"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.243588 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"fa191d0828029dacbf969df687efc18fe1df8965823cd1e50e23f144dd74dd93"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.243603 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"fbc9f1fa3d4ac3d14d7ea63c76ee06964d3bfa41ed4fda36118f19566203c033"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.243634 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"cc43e50cde42c071e1feec4cc7027528c14bf57ec50a1da8b5490d2ed4178620"}
Feb 24 09:22:02 crc kubenswrapper[4822]: I0224 09:22:02.408547 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-v4s72"
Feb 24 09:22:03 crc kubenswrapper[4822]: I0224 09:22:03.261565 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k5l8z" event={"ID":"ad5c2011-3f6e-442c-8aff-341533371121","Type":"ContainerStarted","Data":"0d0d9e5c02b92537149fce3a2d1ad7097ec3887fa60c31f256d960977f1d3f96"}
Feb 24 09:22:03 crc kubenswrapper[4822]: I0224 09:22:03.262162 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:22:03 crc kubenswrapper[4822]: I0224 09:22:03.304290 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k5l8z" podStartSLOduration=5.828680351 podStartE2EDuration="12.304264167s" podCreationTimestamp="2026-02-24 09:21:51 +0000 UTC" firstStartedPulling="2026-02-24 09:21:52.424047313 +0000 UTC m=+834.811809861" lastFinishedPulling="2026-02-24 09:21:58.899631129 +0000 UTC m=+841.287393677" observedRunningTime="2026-02-24 09:22:03.294885636 +0000 UTC m=+845.682648224" watchObservedRunningTime="2026-02-24 09:22:03.304264167 +0000 UTC m=+845.692026755"
Feb 24 09:22:07 crc kubenswrapper[4822]: I0224 09:22:07.324556 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:22:07 crc kubenswrapper[4822]: I0224 09:22:07.395400 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:22:12 crc kubenswrapper[4822]: I0224 09:22:12.331375 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k5l8z"
Feb 24 09:22:12 crc kubenswrapper[4822]: I0224 09:22:12.956747 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9b25"
Feb 24 09:22:13 crc kubenswrapper[4822]: I0224 09:22:13.896728 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6d67q"
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.778778 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"]
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.780393 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b6vn4"
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.782365 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fzg6t"
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.782744 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.783521 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.791424 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"]
Feb 24 09:22:16 crc kubenswrapper[4822]: I0224 09:22:16.936662 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqtr\" (UniqueName: \"kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr\") pod \"openstack-operator-index-b6vn4\" (UID: \"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df\") " pod="openstack-operators/openstack-operator-index-b6vn4"
Feb 24 09:22:17 crc kubenswrapper[4822]: I0224 09:22:17.039796 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqtr\" (UniqueName: \"kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr\") pod \"openstack-operator-index-b6vn4\" (UID: \"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df\") " pod="openstack-operators/openstack-operator-index-b6vn4"
Feb 24 09:22:17 crc kubenswrapper[4822]: I0224 09:22:17.057755 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqtr\" (UniqueName: \"kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr\") pod \"openstack-operator-index-b6vn4\" (UID:
\"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df\") " pod="openstack-operators/openstack-operator-index-b6vn4" Feb 24 09:22:17 crc kubenswrapper[4822]: I0224 09:22:17.103824 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b6vn4" Feb 24 09:22:17 crc kubenswrapper[4822]: I0224 09:22:17.396213 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"] Feb 24 09:22:17 crc kubenswrapper[4822]: W0224 09:22:17.415862 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc25efe_0901_4ed2_a09b_d6e7a5fde7df.slice/crio-d0a670d7639e1e874a9d382d8c2940c7a297ac605beb6044e3c167f80768b643 WatchSource:0}: Error finding container d0a670d7639e1e874a9d382d8c2940c7a297ac605beb6044e3c167f80768b643: Status 404 returned error can't find the container with id d0a670d7639e1e874a9d382d8c2940c7a297ac605beb6044e3c167f80768b643 Feb 24 09:22:18 crc kubenswrapper[4822]: I0224 09:22:18.387857 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b6vn4" event={"ID":"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df","Type":"ContainerStarted","Data":"d0a670d7639e1e874a9d382d8c2940c7a297ac605beb6044e3c167f80768b643"} Feb 24 09:22:19 crc kubenswrapper[4822]: I0224 09:22:19.319015 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"] Feb 24 09:22:19 crc kubenswrapper[4822]: I0224 09:22:19.924800 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x4ksn"] Feb 24 09:22:19 crc kubenswrapper[4822]: I0224 09:22:19.926422 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:19 crc kubenswrapper[4822]: I0224 09:22:19.935794 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x4ksn"] Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.110411 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctxb\" (UniqueName: \"kubernetes.io/projected/bd34b663-65b1-416a-a677-0e32cee456b1-kube-api-access-2ctxb\") pod \"openstack-operator-index-x4ksn\" (UID: \"bd34b663-65b1-416a-a677-0e32cee456b1\") " pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.212204 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctxb\" (UniqueName: \"kubernetes.io/projected/bd34b663-65b1-416a-a677-0e32cee456b1-kube-api-access-2ctxb\") pod \"openstack-operator-index-x4ksn\" (UID: \"bd34b663-65b1-416a-a677-0e32cee456b1\") " pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.238708 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctxb\" (UniqueName: \"kubernetes.io/projected/bd34b663-65b1-416a-a677-0e32cee456b1-kube-api-access-2ctxb\") pod \"openstack-operator-index-x4ksn\" (UID: \"bd34b663-65b1-416a-a677-0e32cee456b1\") " pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.248591 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.403341 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b6vn4" event={"ID":"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df","Type":"ContainerStarted","Data":"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79"} Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.403483 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-b6vn4" podUID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" containerName="registry-server" containerID="cri-o://72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79" gracePeriod=2 Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.438633 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b6vn4" podStartSLOduration=1.945626546 podStartE2EDuration="4.438602637s" podCreationTimestamp="2026-02-24 09:22:16 +0000 UTC" firstStartedPulling="2026-02-24 09:22:17.421631031 +0000 UTC m=+859.809393569" lastFinishedPulling="2026-02-24 09:22:19.914607102 +0000 UTC m=+862.302369660" observedRunningTime="2026-02-24 09:22:20.430898435 +0000 UTC m=+862.818660983" watchObservedRunningTime="2026-02-24 09:22:20.438602637 +0000 UTC m=+862.826365185" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.530244 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x4ksn"] Feb 24 09:22:20 crc kubenswrapper[4822]: W0224 09:22:20.543673 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd34b663_65b1_416a_a677_0e32cee456b1.slice/crio-43a98bbafe03737620514a7eeec9b81f1ad60b640557cd8e586f1da017110916 WatchSource:0}: Error finding container 
43a98bbafe03737620514a7eeec9b81f1ad60b640557cd8e586f1da017110916: Status 404 returned error can't find the container with id 43a98bbafe03737620514a7eeec9b81f1ad60b640557cd8e586f1da017110916 Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.758291 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b6vn4" Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.925969 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqtr\" (UniqueName: \"kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr\") pod \"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df\" (UID: \"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df\") " Feb 24 09:22:20 crc kubenswrapper[4822]: I0224 09:22:20.933417 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr" (OuterVolumeSpecName: "kube-api-access-rfqtr") pod "7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" (UID: "7dc25efe-0901-4ed2-a09b-d6e7a5fde7df"). InnerVolumeSpecName "kube-api-access-rfqtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.027455 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqtr\" (UniqueName: \"kubernetes.io/projected/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df-kube-api-access-rfqtr\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.413652 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x4ksn" event={"ID":"bd34b663-65b1-416a-a677-0e32cee456b1","Type":"ContainerStarted","Data":"5d3d72f0518bb327069e4a09462939e7f1c0fc0a811156928bee1ab39c84a7a5"} Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.414100 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x4ksn" event={"ID":"bd34b663-65b1-416a-a677-0e32cee456b1","Type":"ContainerStarted","Data":"43a98bbafe03737620514a7eeec9b81f1ad60b640557cd8e586f1da017110916"} Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.416536 4822 generic.go:334] "Generic (PLEG): container finished" podID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" containerID="72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79" exitCode=0 Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.416624 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b6vn4" event={"ID":"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df","Type":"ContainerDied","Data":"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79"} Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.416645 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b6vn4" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.416675 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b6vn4" event={"ID":"7dc25efe-0901-4ed2-a09b-d6e7a5fde7df","Type":"ContainerDied","Data":"d0a670d7639e1e874a9d382d8c2940c7a297ac605beb6044e3c167f80768b643"} Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.416727 4822 scope.go:117] "RemoveContainer" containerID="72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.447340 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x4ksn" podStartSLOduration=2.398354726 podStartE2EDuration="2.4473157s" podCreationTimestamp="2026-02-24 09:22:19 +0000 UTC" firstStartedPulling="2026-02-24 09:22:20.550883864 +0000 UTC m=+862.938646412" lastFinishedPulling="2026-02-24 09:22:20.599844838 +0000 UTC m=+862.987607386" observedRunningTime="2026-02-24 09:22:21.435631095 +0000 UTC m=+863.823393683" watchObservedRunningTime="2026-02-24 09:22:21.4473157 +0000 UTC m=+863.835078248" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.448503 4822 scope.go:117] "RemoveContainer" containerID="72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79" Feb 24 09:22:21 crc kubenswrapper[4822]: E0224 09:22:21.449253 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79\": container with ID starting with 72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79 not found: ID does not exist" containerID="72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.449336 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79"} err="failed to get container status \"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79\": rpc error: code = NotFound desc = could not find container \"72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79\": container with ID starting with 72cdaa24335e320af2a933c6d92164c31934c8a25d00bb8c1e4fae0d95baeb79 not found: ID does not exist" Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.463672 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"] Feb 24 09:22:21 crc kubenswrapper[4822]: I0224 09:22:21.468636 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-b6vn4"] Feb 24 09:22:22 crc kubenswrapper[4822]: I0224 09:22:22.351719 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" path="/var/lib/kubelet/pods/7dc25efe-0901-4ed2-a09b-d6e7a5fde7df/volumes" Feb 24 09:22:30 crc kubenswrapper[4822]: I0224 09:22:30.249483 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:30 crc kubenswrapper[4822]: I0224 09:22:30.251210 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:30 crc kubenswrapper[4822]: I0224 09:22:30.299372 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:30 crc kubenswrapper[4822]: I0224 09:22:30.529265 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x4ksn" Feb 24 09:22:31 crc kubenswrapper[4822]: I0224 09:22:31.986325 4822 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h"] Feb 24 09:22:31 crc kubenswrapper[4822]: E0224 09:22:31.987091 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" containerName="registry-server" Feb 24 09:22:31 crc kubenswrapper[4822]: I0224 09:22:31.987114 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" containerName="registry-server" Feb 24 09:22:31 crc kubenswrapper[4822]: I0224 09:22:31.987330 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc25efe-0901-4ed2-a09b-d6e7a5fde7df" containerName="registry-server" Feb 24 09:22:31 crc kubenswrapper[4822]: I0224 09:22:31.988800 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:31 crc kubenswrapper[4822]: I0224 09:22:31.991389 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zvtfb" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.004966 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h"] Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.101652 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dss\" (UniqueName: \"kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.101737 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.101763 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.203715 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79dss\" (UniqueName: \"kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.203819 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.203857 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" 
(UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.204658 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.204802 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.238319 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79dss\" (UniqueName: \"kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss\") pod \"55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.316145 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:32 crc kubenswrapper[4822]: I0224 09:22:32.777509 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h"] Feb 24 09:22:32 crc kubenswrapper[4822]: W0224 09:22:32.788021 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79c55d89_05d9_41be_a8dc_1f2304a04f3e.slice/crio-2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f WatchSource:0}: Error finding container 2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f: Status 404 returned error can't find the container with id 2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f Feb 24 09:22:33 crc kubenswrapper[4822]: I0224 09:22:33.512296 4822 generic.go:334] "Generic (PLEG): container finished" podID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerID="0320c551c7e8434612539b96c672e829fdeb315d0f92069a679fc7f0a167caef" exitCode=0 Feb 24 09:22:33 crc kubenswrapper[4822]: I0224 09:22:33.512392 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" event={"ID":"79c55d89-05d9-41be-a8dc-1f2304a04f3e","Type":"ContainerDied","Data":"0320c551c7e8434612539b96c672e829fdeb315d0f92069a679fc7f0a167caef"} Feb 24 09:22:33 crc kubenswrapper[4822]: I0224 09:22:33.512596 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" event={"ID":"79c55d89-05d9-41be-a8dc-1f2304a04f3e","Type":"ContainerStarted","Data":"2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f"} Feb 24 09:22:34 crc kubenswrapper[4822]: I0224 09:22:34.524170 4822 generic.go:334] "Generic (PLEG): container finished" 
podID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerID="8c40238d256e15aca43cde87d1ed36887ec42725ad9cb983106a70c58387e7e5" exitCode=0 Feb 24 09:22:34 crc kubenswrapper[4822]: I0224 09:22:34.524229 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" event={"ID":"79c55d89-05d9-41be-a8dc-1f2304a04f3e","Type":"ContainerDied","Data":"8c40238d256e15aca43cde87d1ed36887ec42725ad9cb983106a70c58387e7e5"} Feb 24 09:22:35 crc kubenswrapper[4822]: I0224 09:22:35.536084 4822 generic.go:334] "Generic (PLEG): container finished" podID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerID="c8caa9516c4b0d77c180586be2ecf8032a0fcf1783721808aaee85edaa5aa52d" exitCode=0 Feb 24 09:22:35 crc kubenswrapper[4822]: I0224 09:22:35.536311 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" event={"ID":"79c55d89-05d9-41be-a8dc-1f2304a04f3e","Type":"ContainerDied","Data":"c8caa9516c4b0d77c180586be2ecf8032a0fcf1783721808aaee85edaa5aa52d"} Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.860688 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.986110 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util\") pod \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.986202 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79dss\" (UniqueName: \"kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss\") pod \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.986278 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle\") pod \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\" (UID: \"79c55d89-05d9-41be-a8dc-1f2304a04f3e\") " Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.987758 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle" (OuterVolumeSpecName: "bundle") pod "79c55d89-05d9-41be-a8dc-1f2304a04f3e" (UID: "79c55d89-05d9-41be-a8dc-1f2304a04f3e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:22:36 crc kubenswrapper[4822]: I0224 09:22:36.995746 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss" (OuterVolumeSpecName: "kube-api-access-79dss") pod "79c55d89-05d9-41be-a8dc-1f2304a04f3e" (UID: "79c55d89-05d9-41be-a8dc-1f2304a04f3e"). InnerVolumeSpecName "kube-api-access-79dss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.016066 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util" (OuterVolumeSpecName: "util") pod "79c55d89-05d9-41be-a8dc-1f2304a04f3e" (UID: "79c55d89-05d9-41be-a8dc-1f2304a04f3e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.088897 4822 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-util\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.089005 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79dss\" (UniqueName: \"kubernetes.io/projected/79c55d89-05d9-41be-a8dc-1f2304a04f3e-kube-api-access-79dss\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.089033 4822 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79c55d89-05d9-41be-a8dc-1f2304a04f3e-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.557675 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" event={"ID":"79c55d89-05d9-41be-a8dc-1f2304a04f3e","Type":"ContainerDied","Data":"2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f"} Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.557732 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c602688effbcbfd22ad7a6137342dd87163c493f7e611c85742b394355ffb1f" Feb 24 09:22:37 crc kubenswrapper[4822]: I0224 09:22:37.557775 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55e5d213f1433a4b9b5e0857d824c044fa1bafd1cd3a5aa0a04cbade3cqbz4h" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.634559 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s"] Feb 24 09:22:44 crc kubenswrapper[4822]: E0224 09:22:44.635438 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="pull" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.635455 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="pull" Feb 24 09:22:44 crc kubenswrapper[4822]: E0224 09:22:44.635468 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="extract" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.635476 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="extract" Feb 24 09:22:44 crc kubenswrapper[4822]: E0224 09:22:44.635495 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="util" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.635503 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="util" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.635638 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c55d89-05d9-41be-a8dc-1f2304a04f3e" containerName="extract" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.636125 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.638639 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vdksd" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.657824 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s"] Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.794651 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4spn\" (UniqueName: \"kubernetes.io/projected/1f43369b-85d3-454a-8e94-cad4802e606a-kube-api-access-f4spn\") pod \"openstack-operator-controller-init-79475b8ff-fpm6s\" (UID: \"1f43369b-85d3-454a-8e94-cad4802e606a\") " pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.896565 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4spn\" (UniqueName: \"kubernetes.io/projected/1f43369b-85d3-454a-8e94-cad4802e606a-kube-api-access-f4spn\") pod \"openstack-operator-controller-init-79475b8ff-fpm6s\" (UID: \"1f43369b-85d3-454a-8e94-cad4802e606a\") " pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.917904 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4spn\" (UniqueName: \"kubernetes.io/projected/1f43369b-85d3-454a-8e94-cad4802e606a-kube-api-access-f4spn\") pod \"openstack-operator-controller-init-79475b8ff-fpm6s\" (UID: \"1f43369b-85d3-454a-8e94-cad4802e606a\") " pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:44 crc kubenswrapper[4822]: I0224 09:22:44.994732 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:45 crc kubenswrapper[4822]: I0224 09:22:45.302268 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s"] Feb 24 09:22:45 crc kubenswrapper[4822]: I0224 09:22:45.625475 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" event={"ID":"1f43369b-85d3-454a-8e94-cad4802e606a","Type":"ContainerStarted","Data":"5b2aa14f8b47a2de2edca9de7e1c34561df6d04bdb5bfd8650a8ddd151f3d7c7"} Feb 24 09:22:49 crc kubenswrapper[4822]: I0224 09:22:49.661237 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" event={"ID":"1f43369b-85d3-454a-8e94-cad4802e606a","Type":"ContainerStarted","Data":"9a763af416f8aa13511b76acddac90a81929bd373baf816fab6d14aba1179764"} Feb 24 09:22:49 crc kubenswrapper[4822]: I0224 09:22:49.663315 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:22:49 crc kubenswrapper[4822]: I0224 09:22:49.707899 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" podStartSLOduration=2.092360148 podStartE2EDuration="5.707880137s" podCreationTimestamp="2026-02-24 09:22:44 +0000 UTC" firstStartedPulling="2026-02-24 09:22:45.311278552 +0000 UTC m=+887.699041100" lastFinishedPulling="2026-02-24 09:22:48.926798541 +0000 UTC m=+891.314561089" observedRunningTime="2026-02-24 09:22:49.704934838 +0000 UTC m=+892.092697386" watchObservedRunningTime="2026-02-24 09:22:49.707880137 +0000 UTC m=+892.095642685" Feb 24 09:22:55 crc kubenswrapper[4822]: I0224 09:22:54.999352 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-79475b8ff-fpm6s" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.282296 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.283662 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.289488 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.290550 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.292444 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rxb94" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.293101 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b8xlb" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.303724 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.314756 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.321027 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.321712 4822 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.324751 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-52664" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.338534 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.348238 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.348978 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.350615 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-82dnm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.386902 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.417992 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.418777 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.421441 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gfvr6" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.423207 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.445248 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.446024 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.448291 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-79xhf" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459748 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf77w\" (UniqueName: \"kubernetes.io/projected/70026c66-cd4a-4c18-b215-25aa4b2ae4e3-kube-api-access-xf77w\") pod \"barbican-operator-controller-manager-868647ff47-ttdt9\" (UID: \"70026c66-cd4a-4c18-b215-25aa4b2ae4e3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459783 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2j6g\" (UniqueName: \"kubernetes.io/projected/5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1-kube-api-access-g2j6g\") pod \"cinder-operator-controller-manager-55d77d7b5c-gpdd5\" (UID: \"5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1\") " 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459817 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szx5t\" (UniqueName: \"kubernetes.io/projected/3fef6561-ad82-4d3d-ba59-cd7140dd9f05-kube-api-access-szx5t\") pod \"glance-operator-controller-manager-784b5bb6c5-wpr57\" (UID: \"3fef6561-ad82-4d3d-ba59-cd7140dd9f05\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459833 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4p4x\" (UniqueName: \"kubernetes.io/projected/83e243fe-42cf-4cfb-887f-eb7ce40b8acc-kube-api-access-x4p4x\") pod \"designate-operator-controller-manager-6d8bf5c495-xlfx2\" (UID: \"83e243fe-42cf-4cfb-887f-eb7ce40b8acc\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459875 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfx9k\" (UniqueName: \"kubernetes.io/projected/ee4db1d0-39bb-4442-b59c-f355df63cca5-kube-api-access-mfx9k\") pod \"horizon-operator-controller-manager-5b9b8895d5-z6hzm\" (UID: \"ee4db1d0-39bb-4442-b59c-f355df63cca5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.459929 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsx4\" (UniqueName: \"kubernetes.io/projected/ae2eb0b5-cb1a-4716-8680-5588a4cc06c2-kube-api-access-kjsx4\") pod \"heat-operator-controller-manager-69f49c598c-hzf8q\" (UID: \"ae2eb0b5-cb1a-4716-8680-5588a4cc06c2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 
24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.468883 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.479949 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.480702 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.492664 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.493447 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.498273 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.502927 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.503818 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-955fb" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.504392 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k4qzl" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.518945 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.529516 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.530212 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.534146 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zv9pf" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.534453 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-n6x49"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.534944 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.538819 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2gh7x" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.545170 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.548770 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-n6x49"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.578811 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.586873 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.621329 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2xvld" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.621774 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfx9k\" (UniqueName: \"kubernetes.io/projected/ee4db1d0-39bb-4442-b59c-f355df63cca5-kube-api-access-mfx9k\") pod \"horizon-operator-controller-manager-5b9b8895d5-z6hzm\" (UID: \"ee4db1d0-39bb-4442-b59c-f355df63cca5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.621867 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsx4\" (UniqueName: \"kubernetes.io/projected/ae2eb0b5-cb1a-4716-8680-5588a4cc06c2-kube-api-access-kjsx4\") pod \"heat-operator-controller-manager-69f49c598c-hzf8q\" (UID: \"ae2eb0b5-cb1a-4716-8680-5588a4cc06c2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.621967 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf77w\" (UniqueName: \"kubernetes.io/projected/70026c66-cd4a-4c18-b215-25aa4b2ae4e3-kube-api-access-xf77w\") pod \"barbican-operator-controller-manager-868647ff47-ttdt9\" (UID: \"70026c66-cd4a-4c18-b215-25aa4b2ae4e3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.621998 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2j6g\" (UniqueName: \"kubernetes.io/projected/5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1-kube-api-access-g2j6g\") pod 
\"cinder-operator-controller-manager-55d77d7b5c-gpdd5\" (UID: \"5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.622065 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szx5t\" (UniqueName: \"kubernetes.io/projected/3fef6561-ad82-4d3d-ba59-cd7140dd9f05-kube-api-access-szx5t\") pod \"glance-operator-controller-manager-784b5bb6c5-wpr57\" (UID: \"3fef6561-ad82-4d3d-ba59-cd7140dd9f05\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.622086 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4p4x\" (UniqueName: \"kubernetes.io/projected/83e243fe-42cf-4cfb-887f-eb7ce40b8acc-kube-api-access-x4p4x\") pod \"designate-operator-controller-manager-6d8bf5c495-xlfx2\" (UID: \"83e243fe-42cf-4cfb-887f-eb7ce40b8acc\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.640972 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.677806 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szx5t\" (UniqueName: \"kubernetes.io/projected/3fef6561-ad82-4d3d-ba59-cd7140dd9f05-kube-api-access-szx5t\") pod \"glance-operator-controller-manager-784b5bb6c5-wpr57\" (UID: \"3fef6561-ad82-4d3d-ba59-cd7140dd9f05\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.679040 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 
09:23:15.679527 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjsx4\" (UniqueName: \"kubernetes.io/projected/ae2eb0b5-cb1a-4716-8680-5588a4cc06c2-kube-api-access-kjsx4\") pod \"heat-operator-controller-manager-69f49c598c-hzf8q\" (UID: \"ae2eb0b5-cb1a-4716-8680-5588a4cc06c2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.679927 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.680055 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4p4x\" (UniqueName: \"kubernetes.io/projected/83e243fe-42cf-4cfb-887f-eb7ce40b8acc-kube-api-access-x4p4x\") pod \"designate-operator-controller-manager-6d8bf5c495-xlfx2\" (UID: \"83e243fe-42cf-4cfb-887f-eb7ce40b8acc\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.689008 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7q6xs" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.689670 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.690273 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfx9k\" (UniqueName: \"kubernetes.io/projected/ee4db1d0-39bb-4442-b59c-f355df63cca5-kube-api-access-mfx9k\") pod \"horizon-operator-controller-manager-5b9b8895d5-z6hzm\" (UID: \"ee4db1d0-39bb-4442-b59c-f355df63cca5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.698299 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2j6g\" (UniqueName: \"kubernetes.io/projected/5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1-kube-api-access-g2j6g\") pod \"cinder-operator-controller-manager-55d77d7b5c-gpdd5\" (UID: \"5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.698669 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf77w\" (UniqueName: \"kubernetes.io/projected/70026c66-cd4a-4c18-b215-25aa4b2ae4e3-kube-api-access-xf77w\") pod \"barbican-operator-controller-manager-868647ff47-ttdt9\" (UID: \"70026c66-cd4a-4c18-b215-25aa4b2ae4e3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.704338 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.705081 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.709249 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9jmkf" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728414 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728454 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9q8\" (UniqueName: \"kubernetes.io/projected/63991af3-d322-4671-811d-f48f063a77a3-kube-api-access-2j9q8\") pod \"manila-operator-controller-manager-67d996989d-n6x49\" (UID: \"63991af3-d322-4671-811d-f48f063a77a3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728490 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4qn\" (UniqueName: \"kubernetes.io/projected/7d1d716a-d289-4282-af6f-ce4f6965f566-kube-api-access-2n4qn\") pod \"neutron-operator-controller-manager-6bd4687957-7rmlj\" (UID: \"7d1d716a-d289-4282-af6f-ce4f6965f566\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728530 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5p4\" (UniqueName: \"kubernetes.io/projected/c6225e3b-5166-4d5e-b0a8-03c74b182182-kube-api-access-kt5p4\") pod 
\"mariadb-operator-controller-manager-6994f66f48-h5c46\" (UID: \"c6225e3b-5166-4d5e-b0a8-03c74b182182\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728555 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc8t\" (UniqueName: \"kubernetes.io/projected/fd4ef780-76b3-48b1-8408-d92bb8d55566-kube-api-access-sbc8t\") pod \"ironic-operator-controller-manager-554564d7fc-mslnj\" (UID: \"fd4ef780-76b3-48b1-8408-d92bb8d55566\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728586 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wj2r\" (UniqueName: \"kubernetes.io/projected/d1da3e72-10c4-40c4-8add-a162123dc693-kube-api-access-9wj2r\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728603 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgrn\" (UniqueName: \"kubernetes.io/projected/ec52c5d9-7734-4cd8-90c7-2df3617bb887-kube-api-access-xkgrn\") pod \"nova-operator-controller-manager-567668f5cf-gqnp9\" (UID: \"ec52c5d9-7734-4cd8-90c7-2df3617bb887\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728618 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqln\" (UniqueName: \"kubernetes.io/projected/fe0d5f37-87cd-4f29-97c5-9253686da873-kube-api-access-tpqln\") pod \"keystone-operator-controller-manager-b4d948c87-wttq9\" (UID: 
\"fe0d5f37-87cd-4f29-97c5-9253686da873\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.728741 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.739234 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.747231 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.759955 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.763892 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.786089 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.787063 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.800717 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-m7tms" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.818031 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829421 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqln\" (UniqueName: \"kubernetes.io/projected/fe0d5f37-87cd-4f29-97c5-9253686da873-kube-api-access-tpqln\") pod \"keystone-operator-controller-manager-b4d948c87-wttq9\" (UID: \"fe0d5f37-87cd-4f29-97c5-9253686da873\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829454 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkgrn\" (UniqueName: \"kubernetes.io/projected/ec52c5d9-7734-4cd8-90c7-2df3617bb887-kube-api-access-xkgrn\") pod \"nova-operator-controller-manager-567668f5cf-gqnp9\" (UID: \"ec52c5d9-7734-4cd8-90c7-2df3617bb887\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829481 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829499 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2j9q8\" (UniqueName: \"kubernetes.io/projected/63991af3-d322-4671-811d-f48f063a77a3-kube-api-access-2j9q8\") pod \"manila-operator-controller-manager-67d996989d-n6x49\" (UID: \"63991af3-d322-4671-811d-f48f063a77a3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829723 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829800 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n4qn\" (UniqueName: \"kubernetes.io/projected/7d1d716a-d289-4282-af6f-ce4f6965f566-kube-api-access-2n4qn\") pod \"neutron-operator-controller-manager-6bd4687957-7rmlj\" (UID: \"7d1d716a-d289-4282-af6f-ce4f6965f566\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829872 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5p4\" (UniqueName: \"kubernetes.io/projected/c6225e3b-5166-4d5e-b0a8-03c74b182182-kube-api-access-kt5p4\") pod \"mariadb-operator-controller-manager-6994f66f48-h5c46\" (UID: \"c6225e3b-5166-4d5e-b0a8-03c74b182182\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829938 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc8t\" (UniqueName: \"kubernetes.io/projected/fd4ef780-76b3-48b1-8408-d92bb8d55566-kube-api-access-sbc8t\") pod \"ironic-operator-controller-manager-554564d7fc-mslnj\" (UID: \"fd4ef780-76b3-48b1-8408-d92bb8d55566\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:15 crc kubenswrapper[4822]: E0224 09:23:15.829979 4822 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 09:23:15 crc kubenswrapper[4822]: E0224 09:23:15.830022 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert podName:d1da3e72-10c4-40c4-8add-a162123dc693 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:16.330007076 +0000 UTC m=+918.717769614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert") pod "infra-operator-controller-manager-79d975b745-8bp8c" (UID: "d1da3e72-10c4-40c4-8add-a162123dc693") : secret "infra-operator-webhook-server-cert" not found Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.829980 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wj2r\" (UniqueName: \"kubernetes.io/projected/d1da3e72-10c4-40c4-8add-a162123dc693-kube-api-access-9wj2r\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.830623 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.832681 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7r85b" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.836656 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.837578 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.840510 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-8ms9l" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.840645 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.840726 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.851018 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.851822 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbc8t\" (UniqueName: \"kubernetes.io/projected/fd4ef780-76b3-48b1-8408-d92bb8d55566-kube-api-access-sbc8t\") pod \"ironic-operator-controller-manager-554564d7fc-mslnj\" (UID: \"fd4ef780-76b3-48b1-8408-d92bb8d55566\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.852228 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.854431 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.855796 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.868025 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.868350 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jvbk8" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.870602 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-2d4c5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.871014 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9q8\" (UniqueName: \"kubernetes.io/projected/63991af3-d322-4671-811d-f48f063a77a3-kube-api-access-2j9q8\") pod \"manila-operator-controller-manager-67d996989d-n6x49\" (UID: \"63991af3-d322-4671-811d-f48f063a77a3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.871568 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n4qn\" (UniqueName: \"kubernetes.io/projected/7d1d716a-d289-4282-af6f-ce4f6965f566-kube-api-access-2n4qn\") pod \"neutron-operator-controller-manager-6bd4687957-7rmlj\" (UID: \"7d1d716a-d289-4282-af6f-ce4f6965f566\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.878537 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqln\" (UniqueName: \"kubernetes.io/projected/fe0d5f37-87cd-4f29-97c5-9253686da873-kube-api-access-tpqln\") pod \"keystone-operator-controller-manager-b4d948c87-wttq9\" (UID: \"fe0d5f37-87cd-4f29-97c5-9253686da873\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.878598 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.878902 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wj2r\" (UniqueName: \"kubernetes.io/projected/d1da3e72-10c4-40c4-8add-a162123dc693-kube-api-access-9wj2r\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.880467 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.880600 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkgrn\" (UniqueName: \"kubernetes.io/projected/ec52c5d9-7734-4cd8-90c7-2df3617bb887-kube-api-access-xkgrn\") pod \"nova-operator-controller-manager-567668f5cf-gqnp9\" (UID: \"ec52c5d9-7734-4cd8-90c7-2df3617bb887\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.881086 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.883262 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.884785 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m6r2z" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.886416 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5p4\" (UniqueName: \"kubernetes.io/projected/c6225e3b-5166-4d5e-b0a8-03c74b182182-kube-api-access-kt5p4\") pod \"mariadb-operator-controller-manager-6994f66f48-h5c46\" (UID: \"c6225e3b-5166-4d5e-b0a8-03c74b182182\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.892846 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.900450 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.906749 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.915904 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.922045 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.923252 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.929214 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lczml" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932067 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbjp8\" (UniqueName: \"kubernetes.io/projected/9f3fa61d-3144-4ae6-842d-81b2cdd5476e-kube-api-access-zbjp8\") pod \"octavia-operator-controller-manager-659dc6bbfc-67bxj\" (UID: \"9f3fa61d-3144-4ae6-842d-81b2cdd5476e\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932128 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b99q2\" (UniqueName: \"kubernetes.io/projected/1d74d19d-7487-4df7-9086-25d448c200ac-kube-api-access-b99q2\") pod \"telemetry-operator-controller-manager-589c568786-mwwrn\" (UID: \"1d74d19d-7487-4df7-9086-25d448c200ac\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932149 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dzm\" (UniqueName: \"kubernetes.io/projected/d28f5174-ffbd-475c-91ea-ea9d2776495a-kube-api-access-v2dzm\") pod 
\"placement-operator-controller-manager-8497b45c89-kd8sb\" (UID: \"d28f5174-ffbd-475c-91ea-ea9d2776495a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932181 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5px\" (UniqueName: \"kubernetes.io/projected/1e6e9c86-431e-4467-85a6-57c089a101f7-kube-api-access-tg5px\") pod \"swift-operator-controller-manager-68f46476f-7xgt4\" (UID: \"1e6e9c86-431e-4467-85a6-57c089a101f7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932208 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfd66\" (UniqueName: \"kubernetes.io/projected/f0441fa9-b0f9-4b7a-ab83-4b8262352c84-kube-api-access-tfd66\") pod \"test-operator-controller-manager-5dc6794d5b-gg8lt\" (UID: \"f0441fa9-b0f9-4b7a-ab83-4b8262352c84\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932239 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224l9\" (UniqueName: \"kubernetes.io/projected/fadde01c-c670-4a09-b0af-35c60b5542f2-kube-api-access-224l9\") pod \"ovn-operator-controller-manager-5955d8c787-68ppf\" (UID: \"fadde01c-c670-4a09-b0af-35c60b5542f2\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932276 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjxf\" (UniqueName: \"kubernetes.io/projected/884fa763-5a67-4fd5-a578-7c6d81245cb0-kube-api-access-jvjxf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: 
\"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.932295 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.940244 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.941157 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.941733 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.942176 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.948173 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9"] Feb 24 09:23:15 crc kubenswrapper[4822]: I0224 09:23:15.964144 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hxncx" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.007258 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.021346 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.023268 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.024848 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4jtwn" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.026281 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048113 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg5px\" (UniqueName: \"kubernetes.io/projected/1e6e9c86-431e-4467-85a6-57c089a101f7-kube-api-access-tg5px\") pod \"swift-operator-controller-manager-68f46476f-7xgt4\" (UID: \"1e6e9c86-431e-4467-85a6-57c089a101f7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048194 
4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfd66\" (UniqueName: \"kubernetes.io/projected/f0441fa9-b0f9-4b7a-ab83-4b8262352c84-kube-api-access-tfd66\") pod \"test-operator-controller-manager-5dc6794d5b-gg8lt\" (UID: \"f0441fa9-b0f9-4b7a-ab83-4b8262352c84\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048246 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224l9\" (UniqueName: \"kubernetes.io/projected/fadde01c-c670-4a09-b0af-35c60b5542f2-kube-api-access-224l9\") pod \"ovn-operator-controller-manager-5955d8c787-68ppf\" (UID: \"fadde01c-c670-4a09-b0af-35c60b5542f2\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048279 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjxf\" (UniqueName: \"kubernetes.io/projected/884fa763-5a67-4fd5-a578-7c6d81245cb0-kube-api-access-jvjxf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048316 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048374 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbjp8\" (UniqueName: 
\"kubernetes.io/projected/9f3fa61d-3144-4ae6-842d-81b2cdd5476e-kube-api-access-zbjp8\") pod \"octavia-operator-controller-manager-659dc6bbfc-67bxj\" (UID: \"9f3fa61d-3144-4ae6-842d-81b2cdd5476e\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048400 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b99q2\" (UniqueName: \"kubernetes.io/projected/1d74d19d-7487-4df7-9086-25d448c200ac-kube-api-access-b99q2\") pod \"telemetry-operator-controller-manager-589c568786-mwwrn\" (UID: \"1d74d19d-7487-4df7-9086-25d448c200ac\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.048425 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dzm\" (UniqueName: \"kubernetes.io/projected/d28f5174-ffbd-475c-91ea-ea9d2776495a-kube-api-access-v2dzm\") pod \"placement-operator-controller-manager-8497b45c89-kd8sb\" (UID: \"d28f5174-ffbd-475c-91ea-ea9d2776495a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.052856 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.057500 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:16.557455971 +0000 UTC m=+918.945218519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.088756 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjxf\" (UniqueName: \"kubernetes.io/projected/884fa763-5a67-4fd5-a578-7c6d81245cb0-kube-api-access-jvjxf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.091509 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfd66\" (UniqueName: \"kubernetes.io/projected/f0441fa9-b0f9-4b7a-ab83-4b8262352c84-kube-api-access-tfd66\") pod \"test-operator-controller-manager-5dc6794d5b-gg8lt\" (UID: \"f0441fa9-b0f9-4b7a-ab83-4b8262352c84\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.092412 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dzm\" (UniqueName: \"kubernetes.io/projected/d28f5174-ffbd-475c-91ea-ea9d2776495a-kube-api-access-v2dzm\") pod \"placement-operator-controller-manager-8497b45c89-kd8sb\" (UID: \"d28f5174-ffbd-475c-91ea-ea9d2776495a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.094204 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.115844 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.116900 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.121164 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbjp8\" (UniqueName: \"kubernetes.io/projected/9f3fa61d-3144-4ae6-842d-81b2cdd5476e-kube-api-access-zbjp8\") pod \"octavia-operator-controller-manager-659dc6bbfc-67bxj\" (UID: \"9f3fa61d-3144-4ae6-842d-81b2cdd5476e\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.122246 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b99q2\" (UniqueName: \"kubernetes.io/projected/1d74d19d-7487-4df7-9086-25d448c200ac-kube-api-access-b99q2\") pod \"telemetry-operator-controller-manager-589c568786-mwwrn\" (UID: \"1d74d19d-7487-4df7-9086-25d448c200ac\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.123280 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg5px\" (UniqueName: \"kubernetes.io/projected/1e6e9c86-431e-4467-85a6-57c089a101f7-kube-api-access-tg5px\") pod \"swift-operator-controller-manager-68f46476f-7xgt4\" (UID: \"1e6e9c86-431e-4467-85a6-57c089a101f7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.127820 4822 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-224l9\" (UniqueName: \"kubernetes.io/projected/fadde01c-c670-4a09-b0af-35c60b5542f2-kube-api-access-224l9\") pod \"ovn-operator-controller-manager-5955d8c787-68ppf\" (UID: \"fadde01c-c670-4a09-b0af-35c60b5542f2\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.127896 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.140356 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.141693 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.143509 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7tv7m" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.149081 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.149141 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: 
\"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.149184 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5v2l\" (UniqueName: \"kubernetes.io/projected/be395298-93e0-41e3-ac46-9f9d28cbb124-kube-api-access-g5v2l\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.149245 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvtt\" (UniqueName: \"kubernetes.io/projected/b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf-kube-api-access-tkvtt\") pod \"watcher-operator-controller-manager-bccc79885-vb9w9\" (UID: \"b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.149778 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.151399 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.152803 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.176981 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.225570 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.251195 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74nt\" (UniqueName: \"kubernetes.io/projected/c086a8e6-c5d2-4858-ab0e-deebe6574e75-kube-api-access-w74nt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jg4cd\" (UID: \"c086a8e6-c5d2-4858-ab0e-deebe6574e75\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.251243 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvtt\" (UniqueName: \"kubernetes.io/projected/b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf-kube-api-access-tkvtt\") pod \"watcher-operator-controller-manager-bccc79885-vb9w9\" (UID: \"b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.251268 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.251310 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod 
\"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.251353 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5v2l\" (UniqueName: \"kubernetes.io/projected/be395298-93e0-41e3-ac46-9f9d28cbb124-kube-api-access-g5v2l\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.251792 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.251829 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:16.751817117 +0000 UTC m=+919.139579665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.251948 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.252025 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. 
No retries permitted until 2026-02-24 09:23:16.752002032 +0000 UTC m=+919.139764580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.254929 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.272384 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5v2l\" (UniqueName: \"kubernetes.io/projected/be395298-93e0-41e3-ac46-9f9d28cbb124-kube-api-access-g5v2l\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.272816 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvtt\" (UniqueName: \"kubernetes.io/projected/b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf-kube-api-access-tkvtt\") pod \"watcher-operator-controller-manager-bccc79885-vb9w9\" (UID: \"b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.286001 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.315844 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.341743 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.352139 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.352229 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74nt\" (UniqueName: \"kubernetes.io/projected/c086a8e6-c5d2-4858-ab0e-deebe6574e75-kube-api-access-w74nt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jg4cd\" (UID: \"c086a8e6-c5d2-4858-ab0e-deebe6574e75\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.352468 4822 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.352530 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert podName:d1da3e72-10c4-40c4-8add-a162123dc693 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:17.352511251 +0000 UTC m=+919.740273869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert") pod "infra-operator-controller-manager-79d975b745-8bp8c" (UID: "d1da3e72-10c4-40c4-8add-a162123dc693") : secret "infra-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.383628 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74nt\" (UniqueName: \"kubernetes.io/projected/c086a8e6-c5d2-4858-ab0e-deebe6574e75-kube-api-access-w74nt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jg4cd\" (UID: \"c086a8e6-c5d2-4858-ab0e-deebe6574e75\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.409975 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.490933 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.528407 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.535894 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.554981 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57"] Feb 24 09:23:16 crc kubenswrapper[4822]: W0224 09:23:16.576924 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fef6561_ad82_4d3d_ba59_cd7140dd9f05.slice/crio-fcf7c38f40b1cabd8d03e05094ab624fb381a908d9c643724994988b4577a641 WatchSource:0}: Error finding container fcf7c38f40b1cabd8d03e05094ab624fb381a908d9c643724994988b4577a641: Status 404 returned error can't find the container with id fcf7c38f40b1cabd8d03e05094ab624fb381a908d9c643724994988b4577a641 Feb 24 09:23:16 crc kubenswrapper[4822]: W0224 09:23:16.578154 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae2eb0b5_cb1a_4716_8680_5588a4cc06c2.slice/crio-baac5ccfd9a3cbbe2369fe76a848a9e64edd120f3d06bd0997a4217cc7ba9a9a WatchSource:0}: Error finding container baac5ccfd9a3cbbe2369fe76a848a9e64edd120f3d06bd0997a4217cc7ba9a9a: Status 404 returned error can't find the container with id baac5ccfd9a3cbbe2369fe76a848a9e64edd120f3d06bd0997a4217cc7ba9a9a Feb 24 09:23:16 crc kubenswrapper[4822]: W0224 09:23:16.579700 4822 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee4db1d0_39bb_4442_b59c_f355df63cca5.slice/crio-e38c40f0d464e4e00dcb07f69f103b913c328d6262a1ba289da4895901d71777 WatchSource:0}: Error finding container e38c40f0d464e4e00dcb07f69f103b913c328d6262a1ba289da4895901d71777: Status 404 returned error can't find the container with id e38c40f0d464e4e00dcb07f69f103b913c328d6262a1ba289da4895901d71777 Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.634638 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-n6x49"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.656119 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.656593 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.656661 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:17.656644132 +0000 UTC m=+920.044406680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.664658 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.757811 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.757902 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.758076 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.758126 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.758145 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs 
podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:17.758127928 +0000 UTC m=+920.145890496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: E0224 09:23:16.758226 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:17.75820885 +0000 UTC m=+920.145971398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.853031 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9"] Feb 24 09:23:16 crc kubenswrapper[4822]: W0224 09:23:16.853521 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1d716a_d289_4282_af6f_ce4f6965f566.slice/crio-9e1d1379e76af7d4624ed0644820a7ac09dcd7fa9272031e912f5a7d4703bb56 WatchSource:0}: Error finding container 9e1d1379e76af7d4624ed0644820a7ac09dcd7fa9272031e912f5a7d4703bb56: Status 404 returned error can't find the container with id 9e1d1379e76af7d4624ed0644820a7ac09dcd7fa9272031e912f5a7d4703bb56 Feb 24 09:23:16 crc kubenswrapper[4822]: W0224 09:23:16.854839 4822 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70026c66_cd4a_4c18_b215_25aa4b2ae4e3.slice/crio-c7239994a8d67b081bc6b16849650b16b52da75c5e056e886dc870c0c3662358 WatchSource:0}: Error finding container c7239994a8d67b081bc6b16849650b16b52da75c5e056e886dc870c0c3662358: Status 404 returned error can't find the container with id c7239994a8d67b081bc6b16849650b16b52da75c5e056e886dc870c0c3662358 Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.859670 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj"] Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.873464 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" event={"ID":"ae2eb0b5-cb1a-4716-8680-5588a4cc06c2","Type":"ContainerStarted","Data":"baac5ccfd9a3cbbe2369fe76a848a9e64edd120f3d06bd0997a4217cc7ba9a9a"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.878416 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" event={"ID":"ee4db1d0-39bb-4442-b59c-f355df63cca5","Type":"ContainerStarted","Data":"e38c40f0d464e4e00dcb07f69f103b913c328d6262a1ba289da4895901d71777"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.883102 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" event={"ID":"83e243fe-42cf-4cfb-887f-eb7ce40b8acc","Type":"ContainerStarted","Data":"3ef3af5996f1d0804de894e564933062d9afd6d188a1796cf4394e720ee88521"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.885811 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" 
event={"ID":"70026c66-cd4a-4c18-b215-25aa4b2ae4e3","Type":"ContainerStarted","Data":"c7239994a8d67b081bc6b16849650b16b52da75c5e056e886dc870c0c3662358"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.886590 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" event={"ID":"7d1d716a-d289-4282-af6f-ce4f6965f566","Type":"ContainerStarted","Data":"9e1d1379e76af7d4624ed0644820a7ac09dcd7fa9272031e912f5a7d4703bb56"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.891503 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" event={"ID":"63991af3-d322-4671-811d-f48f063a77a3","Type":"ContainerStarted","Data":"e3daed4b690e2d7e4c792d90cf4ad2bf14df3249c2c25557d30ed6353c8fc957"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.892957 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" event={"ID":"3fef6561-ad82-4d3d-ba59-cd7140dd9f05","Type":"ContainerStarted","Data":"fcf7c38f40b1cabd8d03e05094ab624fb381a908d9c643724994988b4577a641"} Feb 24 09:23:16 crc kubenswrapper[4822]: I0224 09:23:16.894114 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" event={"ID":"c6225e3b-5166-4d5e-b0a8-03c74b182182","Type":"ContainerStarted","Data":"f2454e1341f4c6e36f3bb837395eb2da17e7dc88e4cf323e8e7f1d9516406400"} Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.160734 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.167221 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 
09:23:17.183285 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.190577 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.198733 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5"] Feb 24 09:23:17 crc kubenswrapper[4822]: W0224 09:23:17.207961 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5b31b6_1f4a_494e_8050_69c1a5bd4ba1.slice/crio-c506a0ddd2696884364d16d923c39821ce1fa41da4eb9ee75e871cdf5eb66456 WatchSource:0}: Error finding container c506a0ddd2696884364d16d923c39821ce1fa41da4eb9ee75e871cdf5eb66456: Status 404 returned error can't find the container with id c506a0ddd2696884364d16d923c39821ce1fa41da4eb9ee75e871cdf5eb66456 Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.223602 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tpqln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-wttq9_openstack-operators(fe0d5f37-87cd-4f29-97c5-9253686da873): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: W0224 09:23:17.223881 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d74d19d_7487_4df7_9086_25d448c200ac.slice/crio-e84cd1abe4a7e70e0df967474226bc56523fc7a3a9bf307c6d16a333db7965c1 WatchSource:0}: Error finding container 
e84cd1abe4a7e70e0df967474226bc56523fc7a3a9bf307c6d16a333db7965c1: Status 404 returned error can't find the container with id e84cd1abe4a7e70e0df967474226bc56523fc7a3a9bf307c6d16a333db7965c1 Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.225140 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" podUID="fe0d5f37-87cd-4f29-97c5-9253686da873" Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.229122 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn"] Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.231563 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-224l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-68ppf_openstack-operators(fadde01c-c670-4a09-b0af-35c60b5542f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.232789 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" podUID="fadde01c-c670-4a09-b0af-35c60b5542f2" Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.238585 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb"] Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.242841 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfd66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-gg8lt_openstack-operators(f0441fa9-b0f9-4b7a-ab83-4b8262352c84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: W0224 09:23:17.242997 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f3fa61d_3144_4ae6_842d_81b2cdd5476e.slice/crio-6e708024a0ffa0c28a16a9d888f56a82352ca6022a3d2338d79773b7d2555dab WatchSource:0}: Error finding container 6e708024a0ffa0c28a16a9d888f56a82352ca6022a3d2338d79773b7d2555dab: Status 404 returned error can't find the container with id 6e708024a0ffa0c28a16a9d888f56a82352ca6022a3d2338d79773b7d2555dab Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.244144 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" podUID="f0441fa9-b0f9-4b7a-ab83-4b8262352c84" Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.245680 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf"] Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.246004 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b99q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-mwwrn_openstack-operators(1d74d19d-7487-4df7-9086-25d448c200ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.247809 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" podUID="1d74d19d-7487-4df7-9086-25d448c200ac" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.249185 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbjp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-67bxj_openstack-operators(9f3fa61d-3144-4ae6-842d-81b2cdd5476e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.250516 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" podUID="9f3fa61d-3144-4ae6-842d-81b2cdd5476e" Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.254038 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.258979 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.268103 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt"] Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.268139 4822 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w74nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-jg4cd_openstack-operators(c086a8e6-c5d2-4858-ab0e-deebe6574e75): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.269777 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" podUID="c086a8e6-c5d2-4858-ab0e-deebe6574e75" Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.290222 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd"] Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.364458 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.364641 4822 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.364681 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert podName:d1da3e72-10c4-40c4-8add-a162123dc693 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:19.36466753 +0000 UTC m=+921.752430068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert") pod "infra-operator-controller-manager-79d975b745-8bp8c" (UID: "d1da3e72-10c4-40c4-8add-a162123dc693") : secret "infra-operator-webhook-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.668031 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.670389 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.670531 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:19.670469456 +0000 UTC m=+922.058232004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.772421 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.772495 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.772541 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.772613 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:19.77258947 +0000 UTC m=+922.160352018 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.772649 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.772736 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:19.772713773 +0000 UTC m=+922.160476341 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.907864 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" event={"ID":"5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1","Type":"ContainerStarted","Data":"c506a0ddd2696884364d16d923c39821ce1fa41da4eb9ee75e871cdf5eb66456"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.910402 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" event={"ID":"1e6e9c86-431e-4467-85a6-57c089a101f7","Type":"ContainerStarted","Data":"f0b2b937426daa4dc799f0fe7ce519216e947bfe7e66120f3d09ba3c829c9e73"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.912097 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" event={"ID":"fadde01c-c670-4a09-b0af-35c60b5542f2","Type":"ContainerStarted","Data":"16bd4284c1b53181df1e4ac464139b82e401aafdcf8ae2656d27d6cf85771d86"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.913377 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" podUID="fadde01c-c670-4a09-b0af-35c60b5542f2"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.913568 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" event={"ID":"9f3fa61d-3144-4ae6-842d-81b2cdd5476e","Type":"ContainerStarted","Data":"6e708024a0ffa0c28a16a9d888f56a82352ca6022a3d2338d79773b7d2555dab"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.915033 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" podUID="9f3fa61d-3144-4ae6-842d-81b2cdd5476e"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.916526 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" event={"ID":"f0441fa9-b0f9-4b7a-ab83-4b8262352c84","Type":"ContainerStarted","Data":"102a145e61801d02cb37d733cb8a0ef876e13a35209300903b5e30ed29dde8a3"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.917566 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" podUID="f0441fa9-b0f9-4b7a-ab83-4b8262352c84"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.923138 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" event={"ID":"fd4ef780-76b3-48b1-8408-d92bb8d55566","Type":"ContainerStarted","Data":"5115ca970f88c9b1450a0248510528ce114bfc745ad1dc4c404ebcc04571e9e5"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.935365 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" event={"ID":"c086a8e6-c5d2-4858-ab0e-deebe6574e75","Type":"ContainerStarted","Data":"f2b9d0305e37fbaec24fb3cf20a82ea61716d655b01dd6843a9d03b020443785"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.938758 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" podUID="c086a8e6-c5d2-4858-ab0e-deebe6574e75"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.945187 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" event={"ID":"b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf","Type":"ContainerStarted","Data":"6b863a5cf8dc28ce68511f064566b1a09772386630669f6dde1d83db985522f3"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.948999 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" event={"ID":"ec52c5d9-7734-4cd8-90c7-2df3617bb887","Type":"ContainerStarted","Data":"0c49ed756448a3c7ba743b3d7c8ec80c7d50ccf85a861bcba32ced189da156f3"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.954798 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" event={"ID":"d28f5174-ffbd-475c-91ea-ea9d2776495a","Type":"ContainerStarted","Data":"e7574e358f5801b3f206b94150e2d258e00a567d6e69ce550ccae0fbfae8d8e2"}
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.962120 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" event={"ID":"fe0d5f37-87cd-4f29-97c5-9253686da873","Type":"ContainerStarted","Data":"70ce3e74f19cafe3567147ab0ef68b4af759b2dd6c524b8f26cc29b9eb4151f3"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.964424 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" podUID="fe0d5f37-87cd-4f29-97c5-9253686da873"
Feb 24 09:23:17 crc kubenswrapper[4822]: I0224 09:23:17.969042 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" event={"ID":"1d74d19d-7487-4df7-9086-25d448c200ac","Type":"ContainerStarted","Data":"e84cd1abe4a7e70e0df967474226bc56523fc7a3a9bf307c6d16a333db7965c1"}
Feb 24 09:23:17 crc kubenswrapper[4822]: E0224 09:23:17.970894 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" podUID="1d74d19d-7487-4df7-9086-25d448c200ac"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982132 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" podUID="c086a8e6-c5d2-4858-ab0e-deebe6574e75"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982274 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" podUID="1d74d19d-7487-4df7-9086-25d448c200ac"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982324 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" podUID="fadde01c-c670-4a09-b0af-35c60b5542f2"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982356 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" podUID="9f3fa61d-3144-4ae6-842d-81b2cdd5476e"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982365 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" podUID="fe0d5f37-87cd-4f29-97c5-9253686da873"
Feb 24 09:23:18 crc kubenswrapper[4822]: E0224 09:23:18.982384 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" podUID="f0441fa9-b0f9-4b7a-ab83-4b8262352c84"
Feb 24 09:23:19 crc kubenswrapper[4822]: I0224 09:23:19.399310 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.399581 4822 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.399638 4822 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert podName:d1da3e72-10c4-40c4-8add-a162123dc693 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:23.39962093 +0000 UTC m=+925.787383478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert") pod "infra-operator-controller-manager-79d975b745-8bp8c" (UID: "d1da3e72-10c4-40c4-8add-a162123dc693") : secret "infra-operator-webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: I0224 09:23:19.702756 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.703000 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.703089 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:23.703063141 +0000 UTC m=+926.090825699 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: I0224 09:23:19.803596 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:19 crc kubenswrapper[4822]: I0224 09:23:19.803984 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.803851 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.804191 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:23.804176407 +0000 UTC m=+926.191938945 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.804139 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 24 09:23:19 crc kubenswrapper[4822]: E0224 09:23:19.804233 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:23.804223428 +0000 UTC m=+926.191985976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: I0224 09:23:23.458645 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.458751 4822 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.459337 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert 
podName:d1da3e72-10c4-40c4-8add-a162123dc693 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:31.459319309 +0000 UTC m=+933.847081857 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert") pod "infra-operator-controller-manager-79d975b745-8bp8c" (UID: "d1da3e72-10c4-40c4-8add-a162123dc693") : secret "infra-operator-webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: I0224 09:23:23.763397 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.763590 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.763670 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:31.763649005 +0000 UTC m=+934.151411553 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: I0224 09:23:23.863971 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:23 crc kubenswrapper[4822]: I0224 09:23:23.864092 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.864149 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.864174 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.864209 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:31.864192415 +0000 UTC m=+934.251954963 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found
Feb 24 09:23:23 crc kubenswrapper[4822]: E0224 09:23:23.864225 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:31.864217645 +0000 UTC m=+934.251980193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.065191 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" event={"ID":"b6fe59fa-cfd1-4a99-9c18-3af06fb6a1bf","Type":"ContainerStarted","Data":"112a4ec3ca42c0b39f659c30950bd7b2ce47fd430a00210d96180aa89f6f1e6b"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.065738 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.066418 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" event={"ID":"1e6e9c86-431e-4467-85a6-57c089a101f7","Type":"ContainerStarted","Data":"6f312b3daa3c428794e4071de0474a712174634fff1a948b94c1aa6becafb994"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.067020 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.068452 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" event={"ID":"63991af3-d322-4671-811d-f48f063a77a3","Type":"ContainerStarted","Data":"ecb7c1475e7543ba144f2ae3fbd1ddab3c8df29179ebffb778992bdea36a4503"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.068734 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.070235 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" event={"ID":"3fef6561-ad82-4d3d-ba59-cd7140dd9f05","Type":"ContainerStarted","Data":"93de21491dffdda4ec2f7f827ff926528bcce7d7a0b66fc233a48adb93007ee5"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.070356 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.071553 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" event={"ID":"fd4ef780-76b3-48b1-8408-d92bb8d55566","Type":"ContainerStarted","Data":"f7108e270919de3068b1f02d9bf3246fea7f5187780c68e998a87e9d6a286356"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.071704 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.072947 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" 
event={"ID":"83e243fe-42cf-4cfb-887f-eb7ce40b8acc","Type":"ContainerStarted","Data":"dc7be4484a9cea5c1eb4fe136206de7fbedae2240a429ed6f96bbb533d24df0b"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.073087 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.074166 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" event={"ID":"7d1d716a-d289-4282-af6f-ce4f6965f566","Type":"ContainerStarted","Data":"14e188fb1c964648404a1eb8e368fd822527050b1ff1c2527ebe307a4eee18c9"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.075092 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" event={"ID":"5b5b31b6-1f4a-494e-8050-69c1a5bd4ba1","Type":"ContainerStarted","Data":"df0f4301e1c42031060cf1b2190bae4f0b2a8db1b0a86e1a6db4e2a63f0bc404"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.075448 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.076295 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" event={"ID":"70026c66-cd4a-4c18-b215-25aa4b2ae4e3","Type":"ContainerStarted","Data":"af84b218d9710716f2a35ba8312a27536b21fcffbf93b59b7c9540cee683288f"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.076611 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.077397 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" event={"ID":"ae2eb0b5-cb1a-4716-8680-5588a4cc06c2","Type":"ContainerStarted","Data":"c808849ffcc4045c3ee42f59671f4c312ff1712d3cfb414eaaec8853b2427b8a"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.077709 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.078477 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" event={"ID":"ee4db1d0-39bb-4442-b59c-f355df63cca5","Type":"ContainerStarted","Data":"26a4eea49423c09809e11f63d2a59aca38e3a9e53b56ef69208d1ed4b63be2db"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.078701 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.080007 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" event={"ID":"ec52c5d9-7734-4cd8-90c7-2df3617bb887","Type":"ContainerStarted","Data":"78a6dc6dffc627cc9105501d91554f312d2dda0f0654a4233908570698b40173"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.080370 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.083482 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" event={"ID":"d28f5174-ffbd-475c-91ea-ea9d2776495a","Type":"ContainerStarted","Data":"258fe5ff66da70f5f7003dabd775a8c3339e5b3f982a605b4c5778c287b5d0f4"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.083622 4822 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.084923 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" event={"ID":"c6225e3b-5166-4d5e-b0a8-03c74b182182","Type":"ContainerStarted","Data":"293852da61403160271d4edcdc170f56ecd93c66412bac73736f33acf7ffe7fe"}
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.085593 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.115217 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" podStartSLOduration=3.458577553 podStartE2EDuration="15.115204437s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.212826731 +0000 UTC m=+919.600589289" lastFinishedPulling="2026-02-24 09:23:28.869453625 +0000 UTC m=+931.257216173" observedRunningTime="2026-02-24 09:23:30.110899118 +0000 UTC m=+932.498661666" watchObservedRunningTime="2026-02-24 09:23:30.115204437 +0000 UTC m=+932.502966985"
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.131934 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" podStartSLOduration=2.8789219900000003 podStartE2EDuration="15.131904196s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.584167712 +0000 UTC m=+918.971930260" lastFinishedPulling="2026-02-24 09:23:28.837149928 +0000 UTC m=+931.224912466" observedRunningTime="2026-02-24 09:23:30.128630186 +0000 UTC m=+932.516392734" watchObservedRunningTime="2026-02-24 09:23:30.131904196 +0000 UTC m=+932.519666744" 
Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.158153 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" podStartSLOduration=3.5418607509999998 podStartE2EDuration="15.158135275s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.219549677 +0000 UTC m=+919.607312225" lastFinishedPulling="2026-02-24 09:23:28.835824191 +0000 UTC m=+931.223586749" observedRunningTime="2026-02-24 09:23:30.154837775 +0000 UTC m=+932.542600343" watchObservedRunningTime="2026-02-24 09:23:30.158135275 +0000 UTC m=+932.545897823" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.194351 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" podStartSLOduration=2.912754467 podStartE2EDuration="15.194326259s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.580686166 +0000 UTC m=+918.968448714" lastFinishedPulling="2026-02-24 09:23:28.862257958 +0000 UTC m=+931.250020506" observedRunningTime="2026-02-24 09:23:30.187545713 +0000 UTC m=+932.575308261" watchObservedRunningTime="2026-02-24 09:23:30.194326259 +0000 UTC m=+932.582088807" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.229366 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" podStartSLOduration=2.979140331 podStartE2EDuration="15.229349891s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.580454899 +0000 UTC m=+918.968217447" lastFinishedPulling="2026-02-24 09:23:28.830664459 +0000 UTC m=+931.218427007" observedRunningTime="2026-02-24 09:23:30.226426871 +0000 UTC m=+932.614189419" watchObservedRunningTime="2026-02-24 09:23:30.229349891 +0000 UTC m=+932.617112439" Feb 24 09:23:30 crc 
kubenswrapper[4822]: I0224 09:23:30.255846 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" podStartSLOduration=3.273847272 podStartE2EDuration="15.255830238s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.855404389 +0000 UTC m=+919.243166927" lastFinishedPulling="2026-02-24 09:23:28.837387345 +0000 UTC m=+931.225149893" observedRunningTime="2026-02-24 09:23:30.253655398 +0000 UTC m=+932.641417946" watchObservedRunningTime="2026-02-24 09:23:30.255830238 +0000 UTC m=+932.643592786" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.331636 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" podStartSLOduration=3.354229128 podStartE2EDuration="15.331620418s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.858407441 +0000 UTC m=+919.246169989" lastFinishedPulling="2026-02-24 09:23:28.835798731 +0000 UTC m=+931.223561279" observedRunningTime="2026-02-24 09:23:30.294128939 +0000 UTC m=+932.681891477" watchObservedRunningTime="2026-02-24 09:23:30.331620418 +0000 UTC m=+932.719382966" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.364848 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" podStartSLOduration=2.979041628 podStartE2EDuration="15.36483224s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.450017059 +0000 UTC m=+918.837779607" lastFinishedPulling="2026-02-24 09:23:28.835807671 +0000 UTC m=+931.223570219" observedRunningTime="2026-02-24 09:23:30.332302267 +0000 UTC m=+932.720064815" watchObservedRunningTime="2026-02-24 09:23:30.36483224 +0000 UTC m=+932.752594788" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 
09:23:30.366243 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" podStartSLOduration=3.785410756 podStartE2EDuration="15.366236308s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.254948358 +0000 UTC m=+919.642710906" lastFinishedPulling="2026-02-24 09:23:28.83577391 +0000 UTC m=+931.223536458" observedRunningTime="2026-02-24 09:23:30.362549687 +0000 UTC m=+932.750312236" watchObservedRunningTime="2026-02-24 09:23:30.366236308 +0000 UTC m=+932.753998856" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.389711 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" podStartSLOduration=3.779664789 podStartE2EDuration="15.389694713s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.219145545 +0000 UTC m=+919.606908083" lastFinishedPulling="2026-02-24 09:23:28.829175459 +0000 UTC m=+931.216938007" observedRunningTime="2026-02-24 09:23:30.38702041 +0000 UTC m=+932.774782948" watchObservedRunningTime="2026-02-24 09:23:30.389694713 +0000 UTC m=+932.777457261" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.454521 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" podStartSLOduration=3.278856879 podStartE2EDuration="15.454506572s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.654192124 +0000 UTC m=+919.041954672" lastFinishedPulling="2026-02-24 09:23:28.829841807 +0000 UTC m=+931.217604365" observedRunningTime="2026-02-24 09:23:30.413776354 +0000 UTC m=+932.801538902" watchObservedRunningTime="2026-02-24 09:23:30.454506572 +0000 UTC m=+932.842269120" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.455631 4822 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" podStartSLOduration=3.279640811 podStartE2EDuration="15.455625663s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:16.689776321 +0000 UTC m=+919.077538869" lastFinishedPulling="2026-02-24 09:23:28.865761173 +0000 UTC m=+931.253523721" observedRunningTime="2026-02-24 09:23:30.451605993 +0000 UTC m=+932.839368541" watchObservedRunningTime="2026-02-24 09:23:30.455625663 +0000 UTC m=+932.843388211" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.508609 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" podStartSLOduration=3.773238412 podStartE2EDuration="15.508592007s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.205109329 +0000 UTC m=+919.592871887" lastFinishedPulling="2026-02-24 09:23:28.940462934 +0000 UTC m=+931.328225482" observedRunningTime="2026-02-24 09:23:30.479730865 +0000 UTC m=+932.867493413" watchObservedRunningTime="2026-02-24 09:23:30.508592007 +0000 UTC m=+932.896354555" Feb 24 09:23:30 crc kubenswrapper[4822]: I0224 09:23:30.509683 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" podStartSLOduration=3.8736531789999997 podStartE2EDuration="15.509679107s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.223261718 +0000 UTC m=+919.611024276" lastFinishedPulling="2026-02-24 09:23:28.859287656 +0000 UTC m=+931.247050204" observedRunningTime="2026-02-24 09:23:30.50579661 +0000 UTC m=+932.893559158" watchObservedRunningTime="2026-02-24 09:23:30.509679107 +0000 UTC m=+932.897441655" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.096078 4822 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.478338 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.488571 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1da3e72-10c4-40c4-8add-a162123dc693-cert\") pod \"infra-operator-controller-manager-79d975b745-8bp8c\" (UID: \"d1da3e72-10c4-40c4-8add-a162123dc693\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.696342 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.783353 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.783649 4822 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.783756 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert podName:884fa763-5a67-4fd5-a578-7c6d81245cb0 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:47.783728206 +0000 UTC m=+950.171490774 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" (UID: "884fa763-5a67-4fd5-a578-7c6d81245cb0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.884590 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:31 crc kubenswrapper[4822]: I0224 09:23:31.884739 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.884749 4822 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.884824 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:47.884803182 +0000 UTC m=+950.272565730 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "metrics-server-cert" not found Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.884896 4822 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 09:23:31 crc kubenswrapper[4822]: E0224 09:23:31.885046 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs podName:be395298-93e0-41e3-ac46-9f9d28cbb124 nodeName:}" failed. No retries permitted until 2026-02-24 09:23:47.885019367 +0000 UTC m=+950.272781975 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs") pod "openstack-operator-controller-manager-5d9b665c78-8n22w" (UID: "be395298-93e0-41e3-ac46-9f9d28cbb124") : secret "webhook-server-cert" not found Feb 24 09:23:32 crc kubenswrapper[4822]: I0224 09:23:32.208071 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"] Feb 24 09:23:32 crc kubenswrapper[4822]: W0224 09:23:32.213037 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1da3e72_10c4_40c4_8add_a162123dc693.slice/crio-e6ecf2772c72a7308b8c9f03f5ea017ea422e63a0c79c96be3cb972462d9a90e WatchSource:0}: Error finding container e6ecf2772c72a7308b8c9f03f5ea017ea422e63a0c79c96be3cb972462d9a90e: Status 404 returned error can't find the container with id e6ecf2772c72a7308b8c9f03f5ea017ea422e63a0c79c96be3cb972462d9a90e Feb 24 09:23:33 crc kubenswrapper[4822]: I0224 09:23:33.106464 4822 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" event={"ID":"d1da3e72-10c4-40c4-8add-a162123dc693","Type":"ContainerStarted","Data":"e6ecf2772c72a7308b8c9f03f5ea017ea422e63a0c79c96be3cb972462d9a90e"} Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.147886 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-z6hzm" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.151821 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gqnp9" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.151987 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ttdt9" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.153245 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-xlfx2" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.153344 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-7rmlj" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.153856 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mslnj" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.156022 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-n6x49" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.157173 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-h5c46" Feb 24 09:23:36 crc 
kubenswrapper[4822]: I0224 09:23:36.159434 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-wpr57" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.160482 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gpdd5" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.199735 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hzf8q" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.237754 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-kd8sb" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.260542 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7xgt4" Feb 24 09:23:36 crc kubenswrapper[4822]: I0224 09:23:36.357650 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-vb9w9" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.796987 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"] Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.799526 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.823289 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"] Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.893923 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.894027 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.894173 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kcc2\" (UniqueName: \"kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.995826 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.995885 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.995996 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kcc2\" (UniqueName: \"kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.996447 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:39 crc kubenswrapper[4822]: I0224 09:23:39.996498 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:40 crc kubenswrapper[4822]: I0224 09:23:40.020724 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kcc2\" (UniqueName: \"kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2\") pod \"certified-operators-lrhw6\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") " pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:40 crc kubenswrapper[4822]: I0224 09:23:40.123296 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrhw6" Feb 24 09:23:40 crc kubenswrapper[4822]: I0224 09:23:40.663596 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"] Feb 24 09:23:40 crc kubenswrapper[4822]: W0224 09:23:40.678091 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod510fb097_cfa7_4039_a5ec_154965b9a876.slice/crio-937697a0aa16d2726ce9e5717a1d9ba6a8409720fd776aa6f49305d5086f041b WatchSource:0}: Error finding container 937697a0aa16d2726ce9e5717a1d9ba6a8409720fd776aa6f49305d5086f041b: Status 404 returned error can't find the container with id 937697a0aa16d2726ce9e5717a1d9ba6a8409720fd776aa6f49305d5086f041b Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.250609 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" event={"ID":"fadde01c-c670-4a09-b0af-35c60b5542f2","Type":"ContainerStarted","Data":"db29d931c3e213b69778fb6d75c3a0d58a17f33af5f124ac97e60dbedfc61dda"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.251136 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.253787 4822 generic.go:334] "Generic (PLEG): container finished" podID="510fb097-cfa7-4039-a5ec-154965b9a876" containerID="00bccd00b6f572b8eb9311434e377c86c48f5fc779e7843ad296d8d9d0b2263e" exitCode=0 Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.253868 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerDied","Data":"00bccd00b6f572b8eb9311434e377c86c48f5fc779e7843ad296d8d9d0b2263e"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.253954 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerStarted","Data":"937697a0aa16d2726ce9e5717a1d9ba6a8409720fd776aa6f49305d5086f041b"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.255481 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" event={"ID":"fe0d5f37-87cd-4f29-97c5-9253686da873","Type":"ContainerStarted","Data":"6efa1b447682cef3e900d390744be4f419cef03255e23fb3323552aa9757d2cc"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.255716 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.256777 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" event={"ID":"1d74d19d-7487-4df7-9086-25d448c200ac","Type":"ContainerStarted","Data":"054ef39cba27c9a8811dfe8cc628ec119dd7155fcd4d274e9a21e6adbf306413"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.257433 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.259614 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" event={"ID":"9f3fa61d-3144-4ae6-842d-81b2cdd5476e","Type":"ContainerStarted","Data":"75e0ccb3e309d5c3e5764686d4789e1fe3087567e5cdf9eed0ef9e98c373cec2"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.260022 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 
09:23:41.261308 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" event={"ID":"f0441fa9-b0f9-4b7a-ab83-4b8262352c84","Type":"ContainerStarted","Data":"a37cda44e2a464b4d37c0c4761a79aad76ac81d2cdefdb0fa42782233da55076"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.261652 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.263064 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" event={"ID":"c086a8e6-c5d2-4858-ab0e-deebe6574e75","Type":"ContainerStarted","Data":"a8d01448368d70275298af1a90387047f3bf6a26bcaa7a03157931b4c40a2267"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.264267 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" event={"ID":"d1da3e72-10c4-40c4-8add-a162123dc693","Type":"ContainerStarted","Data":"582a31c969e5e1f8adafa9c29d9617734be9951f07a3628a1368bec0f10fb377"} Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.277417 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.281290 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf" podStartSLOduration=3.385340723 podStartE2EDuration="26.281271832s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.231456983 +0000 UTC m=+919.619219531" lastFinishedPulling="2026-02-24 09:23:40.127388092 +0000 UTC m=+942.515150640" observedRunningTime="2026-02-24 09:23:41.277577379 +0000 UTC m=+943.665339927" 
watchObservedRunningTime="2026-02-24 09:23:41.281271832 +0000 UTC m=+943.669034380" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.299529 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c" podStartSLOduration=18.33185143 podStartE2EDuration="26.299514392s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:32.216601381 +0000 UTC m=+934.604363969" lastFinishedPulling="2026-02-24 09:23:40.184264383 +0000 UTC m=+942.572026931" observedRunningTime="2026-02-24 09:23:41.293847116 +0000 UTC m=+943.681609664" watchObservedRunningTime="2026-02-24 09:23:41.299514392 +0000 UTC m=+943.687276940" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.325895 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj" podStartSLOduration=3.357788496 podStartE2EDuration="26.325881226s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.249089027 +0000 UTC m=+919.636851565" lastFinishedPulling="2026-02-24 09:23:40.217181747 +0000 UTC m=+942.604944295" observedRunningTime="2026-02-24 09:23:41.318792801 +0000 UTC m=+943.706555349" watchObservedRunningTime="2026-02-24 09:23:41.325881226 +0000 UTC m=+943.713643774" Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.352289 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jg4cd" podStartSLOduration=2.402982408 podStartE2EDuration="25.352276241s" podCreationTimestamp="2026-02-24 09:23:16 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.268007007 +0000 UTC m=+919.655769545" lastFinishedPulling="2026-02-24 09:23:40.21730083 +0000 UTC m=+942.605063378" observedRunningTime="2026-02-24 09:23:41.350467681 +0000 UTC m=+943.738230229" 
watchObservedRunningTime="2026-02-24 09:23:41.352276241 +0000 UTC m=+943.740038789"
Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.368744 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9" podStartSLOduration=3.372718266 podStartE2EDuration="26.368726372s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.223470044 +0000 UTC m=+919.611232602" lastFinishedPulling="2026-02-24 09:23:40.21947816 +0000 UTC m=+942.607240708" observedRunningTime="2026-02-24 09:23:41.364739843 +0000 UTC m=+943.752502391" watchObservedRunningTime="2026-02-24 09:23:41.368726372 +0000 UTC m=+943.756488910"
Feb 24 09:23:41 crc kubenswrapper[4822]: I0224 09:23:41.382122 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt" podStartSLOduration=3.407609965 podStartE2EDuration="26.38210452s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.242708012 +0000 UTC m=+919.630470560" lastFinishedPulling="2026-02-24 09:23:40.217202567 +0000 UTC m=+942.604965115" observedRunningTime="2026-02-24 09:23:41.374691766 +0000 UTC m=+943.762454314" watchObservedRunningTime="2026-02-24 09:23:41.38210452 +0000 UTC m=+943.769867068"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.278771 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerStarted","Data":"fe3f3fe0736d9ae53336ee16a3cc0462848ff4895e68105ad6d4b827f0179fa6"}
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.312272 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn" podStartSLOduration=4.43093661 podStartE2EDuration="27.312247747s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:17.245866379 +0000 UTC m=+919.633628917" lastFinishedPulling="2026-02-24 09:23:40.127177486 +0000 UTC m=+942.514940054" observedRunningTime="2026-02-24 09:23:41.392133115 +0000 UTC m=+943.779895663" watchObservedRunningTime="2026-02-24 09:23:42.312247747 +0000 UTC m=+944.700010325"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.582328 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwqnh"]
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.584667 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.608004 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwqnh"]
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.731462 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tj5\" (UniqueName: \"kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.731559 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.731676 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.833444 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.833529 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6tj5\" (UniqueName: \"kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.833817 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.834212 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.834284 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.856916 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6tj5\" (UniqueName: \"kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5\") pod \"community-operators-nwqnh\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:42 crc kubenswrapper[4822]: I0224 09:23:42.905659 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:23:43 crc kubenswrapper[4822]: I0224 09:23:43.283401 4822 generic.go:334] "Generic (PLEG): container finished" podID="510fb097-cfa7-4039-a5ec-154965b9a876" containerID="fe3f3fe0736d9ae53336ee16a3cc0462848ff4895e68105ad6d4b827f0179fa6" exitCode=0
Feb 24 09:23:43 crc kubenswrapper[4822]: I0224 09:23:43.283477 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerDied","Data":"fe3f3fe0736d9ae53336ee16a3cc0462848ff4895e68105ad6d4b827f0179fa6"}
Feb 24 09:23:43 crc kubenswrapper[4822]: I0224 09:23:43.380455 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwqnh"]
Feb 24 09:23:44 crc kubenswrapper[4822]: I0224 09:23:44.293825 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerStarted","Data":"a972d001e8e06726157bb02e75c1dd7d22e6db8b6a8e1cd10a9562bc8aef864a"}
Feb 24 09:23:44 crc kubenswrapper[4822]: I0224 09:23:44.295729 4822 generic.go:334] "Generic (PLEG): container finished" podID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerID="845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7" exitCode=0
Feb 24 09:23:44 crc kubenswrapper[4822]: I0224 09:23:44.295794 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerDied","Data":"845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7"}
Feb 24 09:23:44 crc kubenswrapper[4822]: I0224 09:23:44.295833 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerStarted","Data":"e210ac954e64cec2370362ed990dffb4ddc7ece44eb5674f604884b787b72c89"}
Feb 24 09:23:44 crc kubenswrapper[4822]: I0224 09:23:44.325262 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrhw6" podStartSLOduration=2.870912498 podStartE2EDuration="5.325247503s" podCreationTimestamp="2026-02-24 09:23:39 +0000 UTC" firstStartedPulling="2026-02-24 09:23:41.255163014 +0000 UTC m=+943.642925562" lastFinishedPulling="2026-02-24 09:23:43.709498009 +0000 UTC m=+946.097260567" observedRunningTime="2026-02-24 09:23:44.323332951 +0000 UTC m=+946.711095539" watchObservedRunningTime="2026-02-24 09:23:44.325247503 +0000 UTC m=+946.713010051"
Feb 24 09:23:45 crc kubenswrapper[4822]: I0224 09:23:45.677067 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:23:45 crc kubenswrapper[4822]: I0224 09:23:45.677413 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:23:46 crc kubenswrapper[4822]: I0224 09:23:46.156087 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-68ppf"
Feb 24 09:23:46 crc kubenswrapper[4822]: I0224 09:23:46.157233 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wttq9"
Feb 24 09:23:46 crc kubenswrapper[4822]: I0224 09:23:46.181403 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-67bxj"
Feb 24 09:23:46 crc kubenswrapper[4822]: I0224 09:23:46.290322 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-mwwrn"
Feb 24 09:23:46 crc kubenswrapper[4822]: I0224 09:23:46.321247 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-gg8lt"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.809803 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.820162 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/884fa763-5a67-4fd5-a578-7c6d81245cb0-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl\" (UID: \"884fa763-5a67-4fd5-a578-7c6d81245cb0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.911740 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.911834 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.917282 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-metrics-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.920745 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be395298-93e0-41e3-ac46-9f9d28cbb124-webhook-certs\") pod \"openstack-operator-controller-manager-5d9b665c78-8n22w\" (UID: \"be395298-93e0-41e3-ac46-9f9d28cbb124\") " pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:47 crc kubenswrapper[4822]: I0224 09:23:47.998834 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.153040 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.336115 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"]
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.380528 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"]
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.385462 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.396500 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"]
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.521347 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.521459 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.521649 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhgw\" (UniqueName: \"kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.623056 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.623122 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.623185 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhgw\" (UniqueName: \"kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.624112 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.624168 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.647182 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhgw\" (UniqueName: \"kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw\") pod \"redhat-marketplace-whn4k\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.697044 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"]
Feb 24 09:23:48 crc kubenswrapper[4822]: W0224 09:23:48.709519 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe395298_93e0_41e3_ac46_9f9d28cbb124.slice/crio-f71676335347f52ba8c8afdfe3156e7f6f21b4e4623f94c6ce4660ec810c60ea WatchSource:0}: Error finding container f71676335347f52ba8c8afdfe3156e7f6f21b4e4623f94c6ce4660ec810c60ea: Status 404 returned error can't find the container with id f71676335347f52ba8c8afdfe3156e7f6f21b4e4623f94c6ce4660ec810c60ea
Feb 24 09:23:48 crc kubenswrapper[4822]: I0224 09:23:48.712087 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:49 crc kubenswrapper[4822]: I0224 09:23:49.184277 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"]
Feb 24 09:23:49 crc kubenswrapper[4822]: W0224 09:23:49.192377 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57f0fd57_b847_41dc_a2c4_1df82e246c65.slice/crio-04deed5d3208e3f31f76386a1386b2d4e9400a099713ef05efa21fa8bfbb59c7 WatchSource:0}: Error finding container 04deed5d3208e3f31f76386a1386b2d4e9400a099713ef05efa21fa8bfbb59c7: Status 404 returned error can't find the container with id 04deed5d3208e3f31f76386a1386b2d4e9400a099713ef05efa21fa8bfbb59c7
Feb 24 09:23:49 crc kubenswrapper[4822]: I0224 09:23:49.338984 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" event={"ID":"884fa763-5a67-4fd5-a578-7c6d81245cb0","Type":"ContainerStarted","Data":"6cb6b6e6edf412d7614f26f1526e330093add90efba6f0b3bccf967205421dd9"}
Feb 24 09:23:49 crc kubenswrapper[4822]: I0224 09:23:49.340268 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" event={"ID":"be395298-93e0-41e3-ac46-9f9d28cbb124","Type":"ContainerStarted","Data":"f71676335347f52ba8c8afdfe3156e7f6f21b4e4623f94c6ce4660ec810c60ea"}
Feb 24 09:23:49 crc kubenswrapper[4822]: I0224 09:23:49.341339 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerStarted","Data":"04deed5d3208e3f31f76386a1386b2d4e9400a099713ef05efa21fa8bfbb59c7"}
Feb 24 09:23:50 crc kubenswrapper[4822]: I0224 09:23:50.123644 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:50 crc kubenswrapper[4822]: I0224 09:23:50.123792 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:50 crc kubenswrapper[4822]: I0224 09:23:50.204878 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:50 crc kubenswrapper[4822]: I0224 09:23:50.428475 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:51 crc kubenswrapper[4822]: I0224 09:23:51.572849 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"]
Feb 24 09:23:51 crc kubenswrapper[4822]: I0224 09:23:51.707320 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8bp8c"
Feb 24 09:23:52 crc kubenswrapper[4822]: I0224 09:23:52.384607 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrhw6" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="registry-server" containerID="cri-o://a972d001e8e06726157bb02e75c1dd7d22e6db8b6a8e1cd10a9562bc8aef864a" gracePeriod=2
Feb 24 09:23:53 crc kubenswrapper[4822]: I0224 09:23:53.399346 4822 generic.go:334] "Generic (PLEG): container finished" podID="510fb097-cfa7-4039-a5ec-154965b9a876" containerID="a972d001e8e06726157bb02e75c1dd7d22e6db8b6a8e1cd10a9562bc8aef864a" exitCode=0
Feb 24 09:23:53 crc kubenswrapper[4822]: I0224 09:23:53.399416 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerDied","Data":"a972d001e8e06726157bb02e75c1dd7d22e6db8b6a8e1cd10a9562bc8aef864a"}
Feb 24 09:23:53 crc kubenswrapper[4822]: I0224 09:23:53.956752 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.130224 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content\") pod \"510fb097-cfa7-4039-a5ec-154965b9a876\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") "
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.130460 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kcc2\" (UniqueName: \"kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2\") pod \"510fb097-cfa7-4039-a5ec-154965b9a876\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") "
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.130484 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities\") pod \"510fb097-cfa7-4039-a5ec-154965b9a876\" (UID: \"510fb097-cfa7-4039-a5ec-154965b9a876\") "
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.131348 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities" (OuterVolumeSpecName: "utilities") pod "510fb097-cfa7-4039-a5ec-154965b9a876" (UID: "510fb097-cfa7-4039-a5ec-154965b9a876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.136329 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2" (OuterVolumeSpecName: "kube-api-access-9kcc2") pod "510fb097-cfa7-4039-a5ec-154965b9a876" (UID: "510fb097-cfa7-4039-a5ec-154965b9a876"). InnerVolumeSpecName "kube-api-access-9kcc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.232177 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kcc2\" (UniqueName: \"kubernetes.io/projected/510fb097-cfa7-4039-a5ec-154965b9a876-kube-api-access-9kcc2\") on node \"crc\" DevicePath \"\""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.232210 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.417151 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrhw6" event={"ID":"510fb097-cfa7-4039-a5ec-154965b9a876","Type":"ContainerDied","Data":"937697a0aa16d2726ce9e5717a1d9ba6a8409720fd776aa6f49305d5086f041b"}
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.417162 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrhw6"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.417257 4822 scope.go:117] "RemoveContainer" containerID="a972d001e8e06726157bb02e75c1dd7d22e6db8b6a8e1cd10a9562bc8aef864a"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.420083 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" event={"ID":"be395298-93e0-41e3-ac46-9f9d28cbb124","Type":"ContainerStarted","Data":"6c5361411ce2b2aadc9f884e9770203368509e02018dfdd31bbf6dcc0dd3b54b"}
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.420218 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.422382 4822 generic.go:334] "Generic (PLEG): container finished" podID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerID="d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af" exitCode=0
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.422418 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerDied","Data":"d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af"}
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.451023 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w" podStartSLOduration=39.450999697 podStartE2EDuration="39.450999697s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:23:54.442661628 +0000 UTC m=+956.830424166" watchObservedRunningTime="2026-02-24 09:23:54.450999697 +0000 UTC m=+956.838762265"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.481571 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "510fb097-cfa7-4039-a5ec-154965b9a876" (UID: "510fb097-cfa7-4039-a5ec-154965b9a876"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.494002 4822 scope.go:117] "RemoveContainer" containerID="fe3f3fe0736d9ae53336ee16a3cc0462848ff4895e68105ad6d4b827f0179fa6"
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.536532 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510fb097-cfa7-4039-a5ec-154965b9a876-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.756711 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"]
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.761302 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrhw6"]
Feb 24 09:23:54 crc kubenswrapper[4822]: I0224 09:23:54.975714 4822 scope.go:117] "RemoveContainer" containerID="00bccd00b6f572b8eb9311434e377c86c48f5fc779e7843ad296d8d9d0b2263e"
Feb 24 09:23:55 crc kubenswrapper[4822]: I0224 09:23:55.430045 4822 generic.go:334] "Generic (PLEG): container finished" podID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerID="379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c" exitCode=0
Feb 24 09:23:55 crc kubenswrapper[4822]: I0224 09:23:55.430109 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerDied","Data":"379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c"}
Feb 24 09:23:55 crc kubenswrapper[4822]: I0224 09:23:55.431999 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" event={"ID":"884fa763-5a67-4fd5-a578-7c6d81245cb0","Type":"ContainerStarted","Data":"a3bf145e779cbd0c514e44fdba0e5b5d4a704e45d2c30b4cefa4c3ef8d62a847"}
Feb 24 09:23:55 crc kubenswrapper[4822]: I0224 09:23:55.432149 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl"
Feb 24 09:23:55 crc kubenswrapper[4822]: I0224 09:23:55.487300 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" podStartSLOduration=33.808149953 podStartE2EDuration="40.487278788s" podCreationTimestamp="2026-02-24 09:23:15 +0000 UTC" firstStartedPulling="2026-02-24 09:23:48.338818347 +0000 UTC m=+950.726580895" lastFinishedPulling="2026-02-24 09:23:55.017947182 +0000 UTC m=+957.405709730" observedRunningTime="2026-02-24 09:23:55.486027993 +0000 UTC m=+957.873790551" watchObservedRunningTime="2026-02-24 09:23:55.487278788 +0000 UTC m=+957.875041346"
Feb 24 09:23:56 crc kubenswrapper[4822]: I0224 09:23:56.347443 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" path="/var/lib/kubelet/pods/510fb097-cfa7-4039-a5ec-154965b9a876/volumes"
Feb 24 09:23:56 crc kubenswrapper[4822]: I0224 09:23:56.440502 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerStarted","Data":"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8"}
Feb 24 09:23:57 crc kubenswrapper[4822]: I0224 09:23:57.454437 4822 generic.go:334] "Generic (PLEG): container finished" podID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerID="bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8" exitCode=0
Feb 24 09:23:57 crc kubenswrapper[4822]: I0224 09:23:57.454511 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerDied","Data":"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8"}
Feb 24 09:23:57 crc kubenswrapper[4822]: I0224 09:23:57.459566 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerStarted","Data":"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008"}
Feb 24 09:23:57 crc kubenswrapper[4822]: I0224 09:23:57.505679 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwqnh" podStartSLOduration=2.987208468 podStartE2EDuration="15.505651182s" podCreationTimestamp="2026-02-24 09:23:42 +0000 UTC" firstStartedPulling="2026-02-24 09:23:44.297753429 +0000 UTC m=+946.685516017" lastFinishedPulling="2026-02-24 09:23:56.816196143 +0000 UTC m=+959.203958731" observedRunningTime="2026-02-24 09:23:57.499691168 +0000 UTC m=+959.887453746" watchObservedRunningTime="2026-02-24 09:23:57.505651182 +0000 UTC m=+959.893413770"
Feb 24 09:23:58 crc kubenswrapper[4822]: I0224 09:23:58.161180 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d9b665c78-8n22w"
Feb 24 09:23:58 crc kubenswrapper[4822]: I0224 09:23:58.468585 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerStarted","Data":"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7"}
Feb 24 09:23:58 crc kubenswrapper[4822]: I0224 09:23:58.491723 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whn4k" podStartSLOduration=7.112656542 podStartE2EDuration="10.491706014s" podCreationTimestamp="2026-02-24 09:23:48 +0000 UTC" firstStartedPulling="2026-02-24 09:23:54.492598379 +0000 UTC m=+956.880360927" lastFinishedPulling="2026-02-24 09:23:57.871647841 +0000 UTC m=+960.259410399" observedRunningTime="2026-02-24 09:23:58.487741495 +0000 UTC m=+960.875504073" watchObservedRunningTime="2026-02-24 09:23:58.491706014 +0000 UTC m=+960.879468572"
Feb 24 09:23:58 crc kubenswrapper[4822]: I0224 09:23:58.712523 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:58 crc kubenswrapper[4822]: I0224 09:23:58.712884 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-whn4k"
Feb 24 09:23:59 crc kubenswrapper[4822]: I0224 09:23:59.753261 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-whn4k" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="registry-server" probeResult="failure" output=<
Feb 24 09:23:59 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s
Feb 24 09:23:59 crc kubenswrapper[4822]: >
Feb 24 09:24:02 crc kubenswrapper[4822]: I0224 09:24:02.906067 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:24:02 crc kubenswrapper[4822]: I0224 09:24:02.906384 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:24:02 crc kubenswrapper[4822]: I0224 09:24:02.973107 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:24:03 crc kubenswrapper[4822]: I0224 09:24:03.594689 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwqnh"
Feb 24 09:24:03 crc kubenswrapper[4822]: I0224 09:24:03.667329 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwqnh"]
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.525396 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwqnh" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="registry-server" containerID="cri-o://25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008" gracePeriod=2
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.626392 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"]
Feb 24 09:24:05 crc kubenswrapper[4822]: E0224 09:24:05.626776 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="registry-server"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.626791 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="registry-server"
Feb 24 09:24:05 crc kubenswrapper[4822]: E0224 09:24:05.626812 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="extract-content"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.626820 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="extract-content"
Feb 24 09:24:05 crc kubenswrapper[4822]: E0224 09:24:05.626842 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="extract-utilities"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.626851 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="extract-utilities"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.627030 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="510fb097-cfa7-4039-a5ec-154965b9a876" containerName="registry-server"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.628297 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2psg"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.648444 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"]
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.709694 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.710057 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snj4h\" (UniqueName: \"kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg"
Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.710140 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content\") pod \"redhat-operators-w2psg\" (UID:
\"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.811733 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.811845 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.811870 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snj4h\" (UniqueName: \"kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.812454 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.812595 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " 
pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.845315 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snj4h\" (UniqueName: \"kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h\") pod \"redhat-operators-w2psg\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.973727 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwqnh" Feb 24 09:24:05 crc kubenswrapper[4822]: I0224 09:24:05.999772 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.059376 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6tj5\" (UniqueName: \"kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5\") pod \"d1305e7b-ad6b-4155-be7c-75557d0784ea\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.059439 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities\") pod \"d1305e7b-ad6b-4155-be7c-75557d0784ea\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.060816 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities" (OuterVolumeSpecName: "utilities") pod "d1305e7b-ad6b-4155-be7c-75557d0784ea" (UID: "d1305e7b-ad6b-4155-be7c-75557d0784ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.065157 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5" (OuterVolumeSpecName: "kube-api-access-f6tj5") pod "d1305e7b-ad6b-4155-be7c-75557d0784ea" (UID: "d1305e7b-ad6b-4155-be7c-75557d0784ea"). InnerVolumeSpecName "kube-api-access-f6tj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.161070 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content\") pod \"d1305e7b-ad6b-4155-be7c-75557d0784ea\" (UID: \"d1305e7b-ad6b-4155-be7c-75557d0784ea\") " Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.162505 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6tj5\" (UniqueName: \"kubernetes.io/projected/d1305e7b-ad6b-4155-be7c-75557d0784ea-kube-api-access-f6tj5\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.162541 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.234813 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1305e7b-ad6b-4155-be7c-75557d0784ea" (UID: "d1305e7b-ad6b-4155-be7c-75557d0784ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.265324 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1305e7b-ad6b-4155-be7c-75557d0784ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.446455 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"] Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.534470 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerStarted","Data":"5fd8a3793b8519bd01c71efbf83931f4cc4ba3210ecc29d8ca23f5767a8d03ff"} Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.536436 4822 generic.go:334] "Generic (PLEG): container finished" podID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerID="25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008" exitCode=0 Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.536477 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerDied","Data":"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008"} Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.536487 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwqnh" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.536504 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwqnh" event={"ID":"d1305e7b-ad6b-4155-be7c-75557d0784ea","Type":"ContainerDied","Data":"e210ac954e64cec2370362ed990dffb4ddc7ece44eb5674f604884b787b72c89"} Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.536521 4822 scope.go:117] "RemoveContainer" containerID="25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.563679 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwqnh"] Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.565969 4822 scope.go:117] "RemoveContainer" containerID="379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.570716 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwqnh"] Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.586556 4822 scope.go:117] "RemoveContainer" containerID="845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.604722 4822 scope.go:117] "RemoveContainer" containerID="25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008" Feb 24 09:24:06 crc kubenswrapper[4822]: E0224 09:24:06.606153 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008\": container with ID starting with 25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008 not found: ID does not exist" containerID="25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.606212 4822 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008"} err="failed to get container status \"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008\": rpc error: code = NotFound desc = could not find container \"25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008\": container with ID starting with 25fd59b24c5477f8d236bd4840ce6a52f5c364279b60e8e85f2fafac55343008 not found: ID does not exist" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.606243 4822 scope.go:117] "RemoveContainer" containerID="379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c" Feb 24 09:24:06 crc kubenswrapper[4822]: E0224 09:24:06.606611 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c\": container with ID starting with 379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c not found: ID does not exist" containerID="379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.606639 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c"} err="failed to get container status \"379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c\": rpc error: code = NotFound desc = could not find container \"379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c\": container with ID starting with 379deb56eefed963b3d53a22a7239d36c16be81177f8744d79040e642795533c not found: ID does not exist" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.606657 4822 scope.go:117] "RemoveContainer" containerID="845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7" Feb 24 09:24:06 crc kubenswrapper[4822]: E0224 
09:24:06.607084 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7\": container with ID starting with 845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7 not found: ID does not exist" containerID="845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7" Feb 24 09:24:06 crc kubenswrapper[4822]: I0224 09:24:06.607121 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7"} err="failed to get container status \"845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7\": rpc error: code = NotFound desc = could not find container \"845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7\": container with ID starting with 845259b185e49dbcbfc48fd18fb76ac2f27f2386cf41f21af86dc066d4dbd8f7 not found: ID does not exist" Feb 24 09:24:07 crc kubenswrapper[4822]: I0224 09:24:07.548070 4822 generic.go:334] "Generic (PLEG): container finished" podID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerID="def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b" exitCode=0 Feb 24 09:24:07 crc kubenswrapper[4822]: I0224 09:24:07.548121 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerDied","Data":"def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b"} Feb 24 09:24:08 crc kubenswrapper[4822]: I0224 09:24:08.007896 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpzxhl" Feb 24 09:24:08 crc kubenswrapper[4822]: I0224 09:24:08.370626 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" 
path="/var/lib/kubelet/pods/d1305e7b-ad6b-4155-be7c-75557d0784ea/volumes" Feb 24 09:24:08 crc kubenswrapper[4822]: I0224 09:24:08.557769 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerStarted","Data":"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9"} Feb 24 09:24:08 crc kubenswrapper[4822]: I0224 09:24:08.766551 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-whn4k" Feb 24 09:24:08 crc kubenswrapper[4822]: I0224 09:24:08.825249 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-whn4k" Feb 24 09:24:09 crc kubenswrapper[4822]: I0224 09:24:09.568952 4822 generic.go:334] "Generic (PLEG): container finished" podID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerID="9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9" exitCode=0 Feb 24 09:24:09 crc kubenswrapper[4822]: I0224 09:24:09.569015 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerDied","Data":"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9"} Feb 24 09:24:10 crc kubenswrapper[4822]: I0224 09:24:10.580224 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerStarted","Data":"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b"} Feb 24 09:24:10 crc kubenswrapper[4822]: I0224 09:24:10.610468 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2psg" podStartSLOduration=3.176482479 podStartE2EDuration="5.61045256s" podCreationTimestamp="2026-02-24 09:24:05 +0000 UTC" 
firstStartedPulling="2026-02-24 09:24:07.550967246 +0000 UTC m=+969.938729834" lastFinishedPulling="2026-02-24 09:24:09.984937367 +0000 UTC m=+972.372699915" observedRunningTime="2026-02-24 09:24:10.605764367 +0000 UTC m=+972.993526915" watchObservedRunningTime="2026-02-24 09:24:10.61045256 +0000 UTC m=+972.998215108" Feb 24 09:24:11 crc kubenswrapper[4822]: I0224 09:24:11.615232 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"] Feb 24 09:24:11 crc kubenswrapper[4822]: I0224 09:24:11.615756 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whn4k" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="registry-server" containerID="cri-o://c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7" gracePeriod=2 Feb 24 09:24:11 crc kubenswrapper[4822]: E0224 09:24:11.741661 4822 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57f0fd57_b847_41dc_a2c4_1df82e246c65.slice/crio-c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57f0fd57_b847_41dc_a2c4_1df82e246c65.slice/crio-conmon-c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7.scope\": RecentStats: unable to find data in memory cache]" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.073130 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whn4k" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.250277 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content\") pod \"57f0fd57-b847-41dc-a2c4-1df82e246c65\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.250396 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhgw\" (UniqueName: \"kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw\") pod \"57f0fd57-b847-41dc-a2c4-1df82e246c65\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.250468 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities\") pod \"57f0fd57-b847-41dc-a2c4-1df82e246c65\" (UID: \"57f0fd57-b847-41dc-a2c4-1df82e246c65\") " Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.251412 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities" (OuterVolumeSpecName: "utilities") pod "57f0fd57-b847-41dc-a2c4-1df82e246c65" (UID: "57f0fd57-b847-41dc-a2c4-1df82e246c65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.257153 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw" (OuterVolumeSpecName: "kube-api-access-vwhgw") pod "57f0fd57-b847-41dc-a2c4-1df82e246c65" (UID: "57f0fd57-b847-41dc-a2c4-1df82e246c65"). InnerVolumeSpecName "kube-api-access-vwhgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.278972 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57f0fd57-b847-41dc-a2c4-1df82e246c65" (UID: "57f0fd57-b847-41dc-a2c4-1df82e246c65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.352194 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.352238 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhgw\" (UniqueName: \"kubernetes.io/projected/57f0fd57-b847-41dc-a2c4-1df82e246c65-kube-api-access-vwhgw\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.352259 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f0fd57-b847-41dc-a2c4-1df82e246c65-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.601464 4822 generic.go:334] "Generic (PLEG): container finished" podID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerID="c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7" exitCode=0 Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.601537 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerDied","Data":"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7"} Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.601541 4822 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whn4k" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.601569 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whn4k" event={"ID":"57f0fd57-b847-41dc-a2c4-1df82e246c65","Type":"ContainerDied","Data":"04deed5d3208e3f31f76386a1386b2d4e9400a099713ef05efa21fa8bfbb59c7"} Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.601588 4822 scope.go:117] "RemoveContainer" containerID="c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.628124 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"] Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.631316 4822 scope.go:117] "RemoveContainer" containerID="bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.636206 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whn4k"] Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.655630 4822 scope.go:117] "RemoveContainer" containerID="d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.693653 4822 scope.go:117] "RemoveContainer" containerID="c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7" Feb 24 09:24:12 crc kubenswrapper[4822]: E0224 09:24:12.694209 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7\": container with ID starting with c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7 not found: ID does not exist" containerID="c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.694260 4822 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7"} err="failed to get container status \"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7\": rpc error: code = NotFound desc = could not find container \"c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7\": container with ID starting with c9c93b7189ac21f4f9ab8313d1bed1e4bdef58e1a1e64eae2d1c79447c00e7e7 not found: ID does not exist" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.694290 4822 scope.go:117] "RemoveContainer" containerID="bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8" Feb 24 09:24:12 crc kubenswrapper[4822]: E0224 09:24:12.695026 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8\": container with ID starting with bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8 not found: ID does not exist" containerID="bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.695201 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8"} err="failed to get container status \"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8\": rpc error: code = NotFound desc = could not find container \"bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8\": container with ID starting with bf1d5903f39fc46c60df06372a4c06259b4c218a2c2f9cc4bda2a98d7d7100d8 not found: ID does not exist" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.695293 4822 scope.go:117] "RemoveContainer" containerID="d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af" Feb 24 09:24:12 crc kubenswrapper[4822]: E0224 
09:24:12.695732 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af\": container with ID starting with d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af not found: ID does not exist" containerID="d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af" Feb 24 09:24:12 crc kubenswrapper[4822]: I0224 09:24:12.695762 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af"} err="failed to get container status \"d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af\": rpc error: code = NotFound desc = could not find container \"d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af\": container with ID starting with d00bd66ccdb3c7ad56e6987069b4ce95d927cb815ccb231af5c38e41191325af not found: ID does not exist" Feb 24 09:24:14 crc kubenswrapper[4822]: I0224 09:24:14.348329 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" path="/var/lib/kubelet/pods/57f0fd57-b847-41dc-a2c4-1df82e246c65/volumes" Feb 24 09:24:15 crc kubenswrapper[4822]: I0224 09:24:15.676872 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:24:15 crc kubenswrapper[4822]: I0224 09:24:15.676976 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 24 09:24:16 crc kubenswrapper[4822]: I0224 09:24:16.000965 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:16 crc kubenswrapper[4822]: I0224 09:24:16.001026 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:17 crc kubenswrapper[4822]: I0224 09:24:17.072141 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w2psg" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="registry-server" probeResult="failure" output=< Feb 24 09:24:17 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:24:17 crc kubenswrapper[4822]: > Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.725517 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726525 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="extract-content" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.726541 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="extract-content" Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726555 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="extract-utilities" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.726564 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="extract-utilities" Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726580 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="extract-content" Feb 24 09:24:25 crc 
kubenswrapper[4822]: I0224 09:24:25.726589 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="extract-content" Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726602 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="extract-utilities" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.726611 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="extract-utilities" Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726631 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.726640 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: E0224 09:24:25.726653 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.726662 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.731680 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1305e7b-ad6b-4155-be7c-75557d0784ea" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.731715 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f0fd57-b847-41dc-a2c4-1df82e246c65" containerName="registry-server" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.732644 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.733796 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.734421 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8gxrc" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.734724 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.734986 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.735807 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.805660 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.814146 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.818976 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.822327 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.862236 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblc5\" (UniqueName: \"kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.862330 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.963429 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblc5\" (UniqueName: \"kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.963475 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 
24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.963503 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.963534 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj6s\" (UniqueName: \"kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.963567 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.964368 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:25 crc kubenswrapper[4822]: I0224 09:24:25.983011 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblc5\" (UniqueName: \"kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5\") pod \"dnsmasq-dns-675f4bcbfc-f6v77\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 
09:24:26.050891 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.066275 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.066387 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.066437 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhj6s\" (UniqueName: \"kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.067582 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.069191 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.074573 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.110268 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj6s\" (UniqueName: \"kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s\") pod \"dnsmasq-dns-78dd6ddcc-h2nqd\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.129105 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.143054 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.323894 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"] Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.527653 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.534046 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.631945 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:26 crc kubenswrapper[4822]: I0224 09:24:26.738452 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" event={"ID":"2aa41337-70e9-4de2-a201-35da03417af7","Type":"ContainerStarted","Data":"a495480aa0ada8dc7b3a34e6106fbd0dade4a515b59a2d3d4f6662f40aeb3a59"} Feb 24 09:24:26 crc 
kubenswrapper[4822]: I0224 09:24:26.739576 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" event={"ID":"e2e5df84-08e0-467b-bde7-18f191e1bcec","Type":"ContainerStarted","Data":"2c5292b8ee047ed46913eb675c0f2fced6316197f85dd99f374a03f4546df187"} Feb 24 09:24:27 crc kubenswrapper[4822]: I0224 09:24:27.778472 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2psg" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="registry-server" containerID="cri-o://1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b" gracePeriod=2 Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.181584 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.314742 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content\") pod \"e353a9a9-56ef-4144-bad9-72de87e2fe57\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.314839 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snj4h\" (UniqueName: \"kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h\") pod \"e353a9a9-56ef-4144-bad9-72de87e2fe57\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.314890 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities\") pod \"e353a9a9-56ef-4144-bad9-72de87e2fe57\" (UID: \"e353a9a9-56ef-4144-bad9-72de87e2fe57\") " Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.315837 4822 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities" (OuterVolumeSpecName: "utilities") pod "e353a9a9-56ef-4144-bad9-72de87e2fe57" (UID: "e353a9a9-56ef-4144-bad9-72de87e2fe57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.320245 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h" (OuterVolumeSpecName: "kube-api-access-snj4h") pod "e353a9a9-56ef-4144-bad9-72de87e2fe57" (UID: "e353a9a9-56ef-4144-bad9-72de87e2fe57"). InnerVolumeSpecName "kube-api-access-snj4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.418814 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snj4h\" (UniqueName: \"kubernetes.io/projected/e353a9a9-56ef-4144-bad9-72de87e2fe57-kube-api-access-snj4h\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.418846 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.481193 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.493136 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e353a9a9-56ef-4144-bad9-72de87e2fe57" (UID: "e353a9a9-56ef-4144-bad9-72de87e2fe57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.503571 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"] Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.503880 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="extract-content" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.503901 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="extract-content" Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.503927 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="registry-server" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.503934 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="registry-server" Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.503944 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="extract-utilities" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.503950 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="extract-utilities" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.504072 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerName="registry-server" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.504744 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.512633 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.519846 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e353a9a9-56ef-4144-bad9-72de87e2fe57-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.620586 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.620632 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46v4r\" (UniqueName: \"kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.620684 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.719408 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.721865 4822 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.721929 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46v4r\" (UniqueName: \"kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.721982 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.722639 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.722673 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.737988 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.739326 4822 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.741776 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46v4r\" (UniqueName: \"kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r\") pod \"dnsmasq-dns-666b6646f7-7zz57\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.758102 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.795052 4822 generic.go:334] "Generic (PLEG): container finished" podID="e353a9a9-56ef-4144-bad9-72de87e2fe57" containerID="1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b" exitCode=0 Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.795211 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2psg" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.795098 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerDied","Data":"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b"} Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.795310 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2psg" event={"ID":"e353a9a9-56ef-4144-bad9-72de87e2fe57","Type":"ContainerDied","Data":"5fd8a3793b8519bd01c71efbf83931f4cc4ba3210ecc29d8ca23f5767a8d03ff"} Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.795358 4822 scope.go:117] "RemoveContainer" containerID="1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.820142 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.823362 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnbd\" (UniqueName: \"kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.823498 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.823574 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.844061 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.845517 4822 scope.go:117] "RemoveContainer" containerID="9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.849846 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2psg"] Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.918340 4822 scope.go:117] "RemoveContainer" containerID="def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b" Feb 24 09:24:28 crc 
kubenswrapper[4822]: I0224 09:24:28.924906 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.925007 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.925081 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnbd\" (UniqueName: \"kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.925714 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.926562 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.945545 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hlnbd\" (UniqueName: \"kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd\") pod \"dnsmasq-dns-57d769cc4f-z69ts\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.950031 4822 scope.go:117] "RemoveContainer" containerID="1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b" Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.951782 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b\": container with ID starting with 1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b not found: ID does not exist" containerID="1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.951830 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b"} err="failed to get container status \"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b\": rpc error: code = NotFound desc = could not find container \"1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b\": container with ID starting with 1c2ccb4395437a387c9f64871ba8910d716eea851f4a4eb4a0ff05db5208636b not found: ID does not exist" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.951868 4822 scope.go:117] "RemoveContainer" containerID="9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9" Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.952444 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9\": container with ID starting with 
9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9 not found: ID does not exist" containerID="9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.952493 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9"} err="failed to get container status \"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9\": rpc error: code = NotFound desc = could not find container \"9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9\": container with ID starting with 9626e87eaedbc50a15ee97f27aec9097d283cb44bb65c9308449de1851f575e9 not found: ID does not exist" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.952516 4822 scope.go:117] "RemoveContainer" containerID="def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b" Feb 24 09:24:28 crc kubenswrapper[4822]: E0224 09:24:28.953767 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b\": container with ID starting with def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b not found: ID does not exist" containerID="def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b" Feb 24 09:24:28 crc kubenswrapper[4822]: I0224 09:24:28.953791 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b"} err="failed to get container status \"def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b\": rpc error: code = NotFound desc = could not find container \"def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b\": container with ID starting with def65a97e2acd594f1cfc7f8768c31f636f619e093cab56b7ef2948ce924770b not found: ID does not 
exist"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.075793 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.450087 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"]
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.573416 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"]
Feb 24 09:24:29 crc kubenswrapper[4822]: W0224 09:24:29.580814 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2153ae49_dd49_45fa_b8cc_00f2c44f744e.slice/crio-f47652baa26e49aaad80a5f9c797514b5bf9124a78c4f29074d9339f96611eb7 WatchSource:0}: Error finding container f47652baa26e49aaad80a5f9c797514b5bf9124a78c4f29074d9339f96611eb7: Status 404 returned error can't find the container with id f47652baa26e49aaad80a5f9c797514b5bf9124a78c4f29074d9339f96611eb7
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.635169 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.638254 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.642398 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.642603 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tjftn"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.642728 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.642845 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.642933 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.643000 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.643420 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.653147 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741131 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741187 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741211 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741232 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzvc\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-kube-api-access-hgzvc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741267 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741567 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741642 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741692 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741879 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.741981 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.742029 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.804096 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7zz57"
event={"ID":"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566","Type":"ContainerStarted","Data":"ade4656d8ceca7149d5c0af9aa9e5c15f6d8a3f854427906fc49916cea97277b"}
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.805318 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" event={"ID":"2153ae49-dd49-45fa-b8cc-00f2c44f744e","Type":"ContainerStarted","Data":"f47652baa26e49aaad80a5f9c797514b5bf9124a78c4f29074d9339f96611eb7"}
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843494 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843529 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843553 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843583 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843601 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843619 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843682 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843711 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843729 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843760 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzvc\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-kube-api-access-hgzvc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.843790 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.844143 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.844322 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.844868 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.844970 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.844980 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.845110 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.853749 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.854086 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.854423 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.866880 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzvc\"
(UniqueName: \"kubernetes.io/projected/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-kube-api-access-hgzvc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.867577 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d244a0-7be8-4ea4-b8aa-d4d461cb1146-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.878923 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.880674 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.887215 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.887290 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.887550 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.887613 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-kw5b2"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.895425 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.895448 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.900264 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.900788 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146\") " pod="openstack/rabbitmq-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.929350 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.944717 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.944771 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.944796 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.944843 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.944870 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945040 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945126 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945176 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945207 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945255 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnclj\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-kube-api-access-hnclj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.945374 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:29 crc kubenswrapper[4822]: I0224 09:24:29.965328 4822 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049649 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049716 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049761 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049793 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049823 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049846 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049877 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnclj\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-kube-api-access-hnclj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.049953 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.050044 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.050091 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.050129 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.051499 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.052055 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.055185 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.055208 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.055759 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.055860 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.056136 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.056244 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.056841 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.057637 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30
crc kubenswrapper[4822]: I0224 09:24:30.071884 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnclj\" (UniqueName: \"kubernetes.io/projected/ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11-kube-api-access-hnclj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.080896 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.232375 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.350901 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e353a9a9-56ef-4144-bad9-72de87e2fe57" path="/var/lib/kubelet/pods/e353a9a9-56ef-4144-bad9-72de87e2fe57/volumes"
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.506644 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.590637 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.815421 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11","Type":"ContainerStarted","Data":"71369efa185fbf14d970e427833e2283ad07952b7fd81def087cc5b0244b6463"}
Feb 24 09:24:30 crc kubenswrapper[4822]: I0224 09:24:30.817390 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146","Type":"ContainerStarted","Data":"4cd5c7df1c65a980aeb7de54044e6fded12322c941d61002cde0643262374e64"}
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.254248 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.255402 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.258350 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.259718 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.259722 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6ll94"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.264124 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.269041 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.300951 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.378817 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.378874 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.378905 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.378961 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.378982 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.379009 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.379032 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.379063 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8knr\" (UniqueName: \"kubernetes.io/projected/92cdd045-d60c-433d-b2e3-32f93299ee8e-kube-api-access-n8knr\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481489 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481565 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481617 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0"
Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481643 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-default\")
pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481680 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481710 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.481744 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8knr\" (UniqueName: \"kubernetes.io/projected/92cdd045-d60c-433d-b2e3-32f93299ee8e-kube-api-access-n8knr\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.482541 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.482853 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 
09:24:31.483185 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.484721 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.486253 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.486860 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cdd045-d60c-433d-b2e3-32f93299ee8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.490395 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.509420 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.513986 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8knr\" (UniqueName: \"kubernetes.io/projected/92cdd045-d60c-433d-b2e3-32f93299ee8e-kube-api-access-n8knr\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.516622 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92cdd045-d60c-433d-b2e3-32f93299ee8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92cdd045-d60c-433d-b2e3-32f93299ee8e\") " pod="openstack/openstack-galera-0" Feb 24 09:24:31 crc kubenswrapper[4822]: I0224 09:24:31.615523 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.724287 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.725681 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.727575 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-stpf9" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.728098 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.728236 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.729954 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.758491 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.817281 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.817363 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.817414 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.817435 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9mwd\" (UniqueName: \"kubernetes.io/projected/3ff049ae-9abb-4477-9f51-eee7228cedfd-kube-api-access-r9mwd\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.817857 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.818057 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.818128 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.818215 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919621 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9mwd\" (UniqueName: \"kubernetes.io/projected/3ff049ae-9abb-4477-9f51-eee7228cedfd-kube-api-access-r9mwd\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919660 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919703 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919748 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919798 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919835 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919894 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.919929 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.920878 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.921018 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.921827 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.922411 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ff049ae-9abb-4477-9f51-eee7228cedfd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.922814 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ff049ae-9abb-4477-9f51-eee7228cedfd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.925699 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.925753 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff049ae-9abb-4477-9f51-eee7228cedfd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " 
pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.939090 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9mwd\" (UniqueName: \"kubernetes.io/projected/3ff049ae-9abb-4477-9f51-eee7228cedfd-kube-api-access-r9mwd\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:32 crc kubenswrapper[4822]: I0224 09:24:32.944210 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ff049ae-9abb-4477-9f51-eee7228cedfd\") " pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.014244 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.015262 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.021233 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.021284 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.021503 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-cknkc" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.027650 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.056837 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.122026 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsq5p\" (UniqueName: \"kubernetes.io/projected/ebd1051f-ff62-47e1-ae8f-0343453e1544-kube-api-access-zsq5p\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.122063 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.122109 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.122134 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-kolla-config\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.122460 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-config-data\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 
09:24:33.227838 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.227891 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-kolla-config\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.227978 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-config-data\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.228012 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsq5p\" (UniqueName: \"kubernetes.io/projected/ebd1051f-ff62-47e1-ae8f-0343453e1544-kube-api-access-zsq5p\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.228033 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.228795 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-config-data\") pod 
\"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.228664 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebd1051f-ff62-47e1-ae8f-0343453e1544-kolla-config\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.232627 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.236568 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd1051f-ff62-47e1-ae8f-0343453e1544-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.258497 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsq5p\" (UniqueName: \"kubernetes.io/projected/ebd1051f-ff62-47e1-ae8f-0343453e1544-kube-api-access-zsq5p\") pod \"memcached-0\" (UID: \"ebd1051f-ff62-47e1-ae8f-0343453e1544\") " pod="openstack/memcached-0" Feb 24 09:24:33 crc kubenswrapper[4822]: I0224 09:24:33.340375 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.136816 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.138576 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.143289 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fsd8h" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.155637 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.284631 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtkn\" (UniqueName: \"kubernetes.io/projected/42ee9718-560e-43c6-ab6d-ac104bb72e56-kube-api-access-mbtkn\") pod \"kube-state-metrics-0\" (UID: \"42ee9718-560e-43c6-ab6d-ac104bb72e56\") " pod="openstack/kube-state-metrics-0" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.385787 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbtkn\" (UniqueName: \"kubernetes.io/projected/42ee9718-560e-43c6-ab6d-ac104bb72e56-kube-api-access-mbtkn\") pod \"kube-state-metrics-0\" (UID: \"42ee9718-560e-43c6-ab6d-ac104bb72e56\") " pod="openstack/kube-state-metrics-0" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.410681 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbtkn\" (UniqueName: \"kubernetes.io/projected/42ee9718-560e-43c6-ab6d-ac104bb72e56-kube-api-access-mbtkn\") pod \"kube-state-metrics-0\" (UID: \"42ee9718-560e-43c6-ab6d-ac104bb72e56\") " pod="openstack/kube-state-metrics-0" Feb 24 09:24:35 crc kubenswrapper[4822]: I0224 09:24:35.470017 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.582137 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kp7bj"] Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.583586 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.589269 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.589462 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjkph" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.589640 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.592674 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tmwwp"] Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.594610 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.605666 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj"] Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.629024 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmwwp"] Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.746979 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbfl9\" (UniqueName: \"kubernetes.io/projected/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-kube-api-access-zbfl9\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747111 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-ovn-controller-tls-certs\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747160 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747209 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c3821d-0ccc-464b-93c1-6966504fb2c9-scripts\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc 
kubenswrapper[4822]: I0224 09:24:38.747251 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-log-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747287 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747323 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-log\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747371 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bsg\" (UniqueName: \"kubernetes.io/projected/e2c3821d-0ccc-464b-93c1-6966504fb2c9-kube-api-access-s8bsg\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747410 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-run\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747431 4822 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-scripts\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747472 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-etc-ovs\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747505 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-lib\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.747535 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-combined-ca-bundle\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849496 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbfl9\" (UniqueName: \"kubernetes.io/projected/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-kube-api-access-zbfl9\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849547 4822 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-ovn-controller-tls-certs\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849569 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849593 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c3821d-0ccc-464b-93c1-6966504fb2c9-scripts\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849615 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-log-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849634 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849655 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-log\") pod \"ovn-controller-ovs-tmwwp\" (UID: 
\"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849684 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8bsg\" (UniqueName: \"kubernetes.io/projected/e2c3821d-0ccc-464b-93c1-6966504fb2c9-kube-api-access-s8bsg\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849710 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-run\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849725 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-scripts\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849747 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-etc-ovs\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.849768 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-lib\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 
09:24:38.849785 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-combined-ca-bundle\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.850539 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.850736 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-run\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.850947 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-var-log-ovn\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.851808 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-etc-ovs\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.852050 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-log\") pod 
\"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.852299 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-run\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.852508 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e2c3821d-0ccc-464b-93c1-6966504fb2c9-var-lib\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.852617 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c3821d-0ccc-464b-93c1-6966504fb2c9-scripts\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.854153 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-scripts\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.855142 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-ovn-controller-tls-certs\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.856668 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-combined-ca-bundle\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.869925 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbfl9\" (UniqueName: \"kubernetes.io/projected/33ac8d7e-339a-4b57-8d8d-38393ee4f9ce-kube-api-access-zbfl9\") pod \"ovn-controller-kp7bj\" (UID: \"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce\") " pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.875939 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8bsg\" (UniqueName: \"kubernetes.io/projected/e2c3821d-0ccc-464b-93c1-6966504fb2c9-kube-api-access-s8bsg\") pod \"ovn-controller-ovs-tmwwp\" (UID: \"e2c3821d-0ccc-464b-93c1-6966504fb2c9\") " pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.909568 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:38 crc kubenswrapper[4822]: I0224 09:24:38.911727 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.862562 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.864139 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.865765 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.866406 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-j95pp" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.866728 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.866844 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.866980 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.879814 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.995956 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996007 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996027 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996056 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996096 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slxm\" (UniqueName: \"kubernetes.io/projected/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-kube-api-access-2slxm\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996230 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996269 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:41 crc kubenswrapper[4822]: I0224 09:24:41.996444 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-config\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.096171 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.097432 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098003 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-config\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098056 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098081 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098100 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 
09:24:42.098128 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098167 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2slxm\" (UniqueName: \"kubernetes.io/projected/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-kube-api-access-2slxm\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098196 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.098213 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.099035 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.099095 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.099265 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.100068 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.100317 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.102722 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-config\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.103490 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.103552 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.121694 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.142380 4822 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.154133 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.155172 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sxn47" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.160336 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2slxm\" (UniqueName: \"kubernetes.io/projected/531a3082-e3fe-40f7-b7ca-63b78cbb3fcd-kube-api-access-2slxm\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.185347 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd\") " pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.190447 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.199798 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200256 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200302 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200344 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-config\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200374 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc 
kubenswrapper[4822]: I0224 09:24:42.200398 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200422 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddx4s\" (UniqueName: \"kubernetes.io/projected/ce6c3961-b502-497c-9e1f-060d93d96768-kube-api-access-ddx4s\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.200459 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302067 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302127 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302160 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-config\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302181 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302283 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302315 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddx4s\" (UniqueName: \"kubernetes.io/projected/ce6c3961-b502-497c-9e1f-060d93d96768-kube-api-access-ddx4s\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302344 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302386 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.302638 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.304072 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.304503 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.306363 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3961-b502-497c-9e1f-060d93d96768-config\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.309411 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.311047 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.311597 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6c3961-b502-497c-9e1f-060d93d96768-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.321205 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.342797 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddx4s\" (UniqueName: \"kubernetes.io/projected/ce6c3961-b502-497c-9e1f-060d93d96768-kube-api-access-ddx4s\") pod \"ovsdbserver-sb-0\" (UID: \"ce6c3961-b502-497c-9e1f-060d93d96768\") " pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:42 crc kubenswrapper[4822]: I0224 09:24:42.536060 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 09:24:44 crc kubenswrapper[4822]: E0224 09:24:44.596900 4822 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 24 09:24:44 crc kubenswrapper[4822]: E0224 09:24:44.597465 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhj6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCont
ext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-h2nqd_openstack(2aa41337-70e9-4de2-a201-35da03417af7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 09:24:44 crc kubenswrapper[4822]: E0224 09:24:44.598855 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" podUID="2aa41337-70e9-4de2-a201-35da03417af7" Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.676270 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.676544 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.677236 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.677862 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.677907 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c" gracePeriod=600 Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.906453 4822 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.906764 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlnbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-z69ts_openstack(2153ae49-dd49-45fa-b8cc-00f2c44f744e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.908124 4822 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.954083 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c" exitCode=0 Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.954205 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c"} Feb 24 09:24:45 crc kubenswrapper[4822]: I0224 09:24:45.954288 4822 scope.go:117] "RemoveContainer" containerID="f5f1eb1caf8f3fa53d2384fafa78a76cc1e2aaee0a945eb5b651032f65068caf" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.962673 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.972061 4822 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.972316 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46v4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-7zz57_openstack(ccca9e57-53c8-4d44-b4e9-2a6b4de5b566): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 09:24:45 crc kubenswrapper[4822]: E0224 09:24:45.981138 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" Feb 24 09:24:46 crc kubenswrapper[4822]: E0224 09:24:46.015582 4822 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 24 09:24:46 crc kubenswrapper[4822]: E0224 09:24:46.015781 4822 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tblc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-f6v77_openstack(e2e5df84-08e0-467b-bde7-18f191e1bcec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 09:24:46 crc kubenswrapper[4822]: E0224 09:24:46.016991 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" podUID="e2e5df84-08e0-467b-bde7-18f191e1bcec" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.141294 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.279382 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc\") pod \"2aa41337-70e9-4de2-a201-35da03417af7\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.279472 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhj6s\" (UniqueName: \"kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s\") pod \"2aa41337-70e9-4de2-a201-35da03417af7\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.279516 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config\") pod \"2aa41337-70e9-4de2-a201-35da03417af7\" (UID: \"2aa41337-70e9-4de2-a201-35da03417af7\") " Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.281177 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2aa41337-70e9-4de2-a201-35da03417af7" (UID: "2aa41337-70e9-4de2-a201-35da03417af7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.285141 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config" (OuterVolumeSpecName: "config") pod "2aa41337-70e9-4de2-a201-35da03417af7" (UID: "2aa41337-70e9-4de2-a201-35da03417af7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.296130 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s" (OuterVolumeSpecName: "kube-api-access-xhj6s") pod "2aa41337-70e9-4de2-a201-35da03417af7" (UID: "2aa41337-70e9-4de2-a201-35da03417af7"). InnerVolumeSpecName "kube-api-access-xhj6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.381959 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.382577 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhj6s\" (UniqueName: \"kubernetes.io/projected/2aa41337-70e9-4de2-a201-35da03417af7-kube-api-access-xhj6s\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.382590 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa41337-70e9-4de2-a201-35da03417af7-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.409436 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.668519 4822 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 09:24:46 crc kubenswrapper[4822]: W0224 09:24:46.671448 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ee9718_560e_43c6_ab6d_ac104bb72e56.slice/crio-06dac982930840e1c6a163ad90bb927c28105cdf57bd3bf51411fd39c9e64c13 WatchSource:0}: Error finding container 06dac982930840e1c6a163ad90bb927c28105cdf57bd3bf51411fd39c9e64c13: Status 404 returned error can't find the container with id 06dac982930840e1c6a163ad90bb927c28105cdf57bd3bf51411fd39c9e64c13 Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.770263 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.793232 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.873032 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 09:24:46 crc kubenswrapper[4822]: W0224 09:24:46.875314 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce6c3961_b502_497c_9e1f_060d93d96768.slice/crio-b226740218118f9c7b5dd5f5159908580b33bd4749b119710a9b2ed1b82aae3c WatchSource:0}: Error finding container b226740218118f9c7b5dd5f5159908580b33bd4749b119710a9b2ed1b82aae3c: Status 404 returned error can't find the container with id b226740218118f9c7b5dd5f5159908580b33bd4749b119710a9b2ed1b82aae3c Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.924689 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj"] Feb 24 09:24:46 crc kubenswrapper[4822]: W0224 09:24:46.925824 4822 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ac8d7e_339a_4b57_8d8d_38393ee4f9ce.slice/crio-c63a81b1f818d82b93502f753e361719f09ee05ffcde8389fe1105d138d38988 WatchSource:0}: Error finding container c63a81b1f818d82b93502f753e361719f09ee05ffcde8389fe1105d138d38988: Status 404 returned error can't find the container with id c63a81b1f818d82b93502f753e361719f09ee05ffcde8389fe1105d138d38988 Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.960987 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ce6c3961-b502-497c-9e1f-060d93d96768","Type":"ContainerStarted","Data":"b226740218118f9c7b5dd5f5159908580b33bd4749b119710a9b2ed1b82aae3c"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.961984 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ebd1051f-ff62-47e1-ae8f-0343453e1544","Type":"ContainerStarted","Data":"9a560bb13afa274003f0812b2eacec96d969217b76a905b8e1dd7df1f42bc7c7"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.962817 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" event={"ID":"2aa41337-70e9-4de2-a201-35da03417af7","Type":"ContainerDied","Data":"a495480aa0ada8dc7b3a34e6106fbd0dade4a515b59a2d3d4f6662f40aeb3a59"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.962851 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-h2nqd" Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.964399 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"4b49cfd3f914e292d84cbad691172842d85b7cce2cd9163abbb91185fc5855d7"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.966177 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42ee9718-560e-43c6-ab6d-ac104bb72e56","Type":"ContainerStarted","Data":"06dac982930840e1c6a163ad90bb927c28105cdf57bd3bf51411fd39c9e64c13"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.967804 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj" event={"ID":"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce","Type":"ContainerStarted","Data":"c63a81b1f818d82b93502f753e361719f09ee05ffcde8389fe1105d138d38988"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.968817 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"28c60dad3d5289a72c45cb3451bf7a0b57da44dc19fb21e39abbd86753ac960f"} Feb 24 09:24:46 crc kubenswrapper[4822]: I0224 09:24:46.971081 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3"} Feb 24 09:24:46 crc kubenswrapper[4822]: E0224 09:24:46.972574 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" 
podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.013453 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.173250 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.183523 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-h2nqd"] Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.297122 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.402475 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblc5\" (UniqueName: \"kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5\") pod \"e2e5df84-08e0-467b-bde7-18f191e1bcec\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.402748 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config\") pod \"e2e5df84-08e0-467b-bde7-18f191e1bcec\" (UID: \"e2e5df84-08e0-467b-bde7-18f191e1bcec\") " Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.403426 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config" (OuterVolumeSpecName: "config") pod "e2e5df84-08e0-467b-bde7-18f191e1bcec" (UID: "e2e5df84-08e0-467b-bde7-18f191e1bcec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.407661 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5" (OuterVolumeSpecName: "kube-api-access-tblc5") pod "e2e5df84-08e0-467b-bde7-18f191e1bcec" (UID: "e2e5df84-08e0-467b-bde7-18f191e1bcec"). InnerVolumeSpecName "kube-api-access-tblc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.504851 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblc5\" (UniqueName: \"kubernetes.io/projected/e2e5df84-08e0-467b-bde7-18f191e1bcec-kube-api-access-tblc5\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.504880 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5df84-08e0-467b-bde7-18f191e1bcec-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.909954 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmwwp"] Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.989960 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd","Type":"ContainerStarted","Data":"77311c7ea21633a2559b5bfbd2692e56b00c50a5e2d58a6c0dee411d6de57a4e"} Feb 24 09:24:47 crc kubenswrapper[4822]: I0224 09:24:47.992087 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146","Type":"ContainerStarted","Data":"a4482afe55783bdd9eb8f679ae8239fb65c96f27296c7b916c66bc00a02bf402"} Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.004256 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.004277 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f6v77" event={"ID":"e2e5df84-08e0-467b-bde7-18f191e1bcec","Type":"ContainerDied","Data":"2c5292b8ee047ed46913eb675c0f2fced6316197f85dd99f374a03f4546df187"} Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.028058 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11","Type":"ContainerStarted","Data":"e6bd616cd1448d22f313e38f220996445a159a52a7329544e4c3432bfe2c47a0"} Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.070960 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.074304 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f6v77"] Feb 24 09:24:48 crc kubenswrapper[4822]: W0224 09:24:48.083432 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c3821d_0ccc_464b_93c1_6966504fb2c9.slice/crio-73763053e3a071d66b31f1da712d905c07e0f0e745487f69a8c14b725484b734 WatchSource:0}: Error finding container 73763053e3a071d66b31f1da712d905c07e0f0e745487f69a8c14b725484b734: Status 404 returned error can't find the container with id 73763053e3a071d66b31f1da712d905c07e0f0e745487f69a8c14b725484b734 Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.353475 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa41337-70e9-4de2-a201-35da03417af7" path="/var/lib/kubelet/pods/2aa41337-70e9-4de2-a201-35da03417af7/volumes" Feb 24 09:24:48 crc kubenswrapper[4822]: I0224 09:24:48.354105 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e5df84-08e0-467b-bde7-18f191e1bcec" 
path="/var/lib/kubelet/pods/e2e5df84-08e0-467b-bde7-18f191e1bcec/volumes" Feb 24 09:24:49 crc kubenswrapper[4822]: I0224 09:24:49.035844 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmwwp" event={"ID":"e2c3821d-0ccc-464b-93c1-6966504fb2c9","Type":"ContainerStarted","Data":"73763053e3a071d66b31f1da712d905c07e0f0e745487f69a8c14b725484b734"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.604323 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj" event={"ID":"33ac8d7e-339a-4b57-8d8d-38393ee4f9ce","Type":"ContainerStarted","Data":"e960e23edc1567a6c1b46f92f090eba9df7f30b5e345b04fd67185a87dc1b205"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.604846 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kp7bj" Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.607355 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"3fb2f25a823cf4f1505a14cda62cb5c1441ea570567b17c256afb513b695ab7a"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.611298 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ce6c3961-b502-497c-9e1f-060d93d96768","Type":"ContainerStarted","Data":"49744035a4c526c6f1e9d0801276e166e2eb564fa7cef08e4ffd847e0ad74233"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.624710 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ebd1051f-ff62-47e1-ae8f-0343453e1544","Type":"ContainerStarted","Data":"dccf9ce8aa5b30373fb3a9e6429b426c375b7b4c3ff3382da93f818d2e082218"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.626831 4822 generic.go:334] "Generic (PLEG): container finished" podID="e2c3821d-0ccc-464b-93c1-6966504fb2c9" 
containerID="81382f083f784147cc8e9bad7057ef8b7bbde77181637d8ef61dee6d5f63b746" exitCode=0 Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.627176 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmwwp" event={"ID":"e2c3821d-0ccc-464b-93c1-6966504fb2c9","Type":"ContainerDied","Data":"81382f083f784147cc8e9bad7057ef8b7bbde77181637d8ef61dee6d5f63b746"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.638525 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd","Type":"ContainerStarted","Data":"c0b5b8da41b6702ca75ce7b8b5fb146b0bbda2dfbdbeeba761f960dc7d62b4b7"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.643100 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"f32c98ab2588772fcc9ba8d953e16671b6da925a4568877710d3fd67911e6581"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.649124 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42ee9718-560e-43c6-ab6d-ac104bb72e56","Type":"ContainerStarted","Data":"9034f0c898b3baf167d12a3e7b7c20e45e301c84475b9df95ffe22e6871a9e2b"} Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.649367 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.669184 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kp7bj" podStartSLOduration=10.100009336 podStartE2EDuration="19.669159763s" podCreationTimestamp="2026-02-24 09:24:38 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.927674892 +0000 UTC m=+1009.315437440" lastFinishedPulling="2026-02-24 09:24:56.496825319 +0000 UTC m=+1018.884587867" observedRunningTime="2026-02-24 09:24:57.63754665 +0000 
UTC m=+1020.025309218" watchObservedRunningTime="2026-02-24 09:24:57.669159763 +0000 UTC m=+1020.056922321" Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.749164 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.088940174 podStartE2EDuration="25.7491143s" podCreationTimestamp="2026-02-24 09:24:32 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.781637714 +0000 UTC m=+1009.169400262" lastFinishedPulling="2026-02-24 09:24:56.44181184 +0000 UTC m=+1018.829574388" observedRunningTime="2026-02-24 09:24:57.717762014 +0000 UTC m=+1020.105524572" watchObservedRunningTime="2026-02-24 09:24:57.7491143 +0000 UTC m=+1020.136876858" Feb 24 09:24:57 crc kubenswrapper[4822]: I0224 09:24:57.774419 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.930589013 podStartE2EDuration="22.774400436s" podCreationTimestamp="2026-02-24 09:24:35 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.676249278 +0000 UTC m=+1009.064011836" lastFinishedPulling="2026-02-24 09:24:56.520060681 +0000 UTC m=+1018.907823259" observedRunningTime="2026-02-24 09:24:57.764763892 +0000 UTC m=+1020.152526450" watchObservedRunningTime="2026-02-24 09:24:57.774400436 +0000 UTC m=+1020.162162984" Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.349959 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.659069 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmwwp" event={"ID":"e2c3821d-0ccc-464b-93c1-6966504fb2c9","Type":"ContainerStarted","Data":"3b5152e8224cafeb834d959e58758d10766a5559c983524fa02b2b8930f36db7"} Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.659502 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmwwp" 
event={"ID":"e2c3821d-0ccc-464b-93c1-6966504fb2c9","Type":"ContainerStarted","Data":"d3fbfec99210b561d340ebdd2d2d23ca02a36592c0352d6b38227c2f360a07cd"} Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.686955 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tmwwp" podStartSLOduration=12.303985726 podStartE2EDuration="20.686892582s" podCreationTimestamp="2026-02-24 09:24:38 +0000 UTC" firstStartedPulling="2026-02-24 09:24:48.103490048 +0000 UTC m=+1010.491252596" lastFinishedPulling="2026-02-24 09:24:56.486396874 +0000 UTC m=+1018.874159452" observedRunningTime="2026-02-24 09:24:58.682149307 +0000 UTC m=+1021.069911885" watchObservedRunningTime="2026-02-24 09:24:58.686892582 +0000 UTC m=+1021.074655180" Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.911931 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:24:58 crc kubenswrapper[4822]: I0224 09:24:58.911984 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.679060 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="3fb2f25a823cf4f1505a14cda62cb5c1441ea570567b17c256afb513b695ab7a" exitCode=0 Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.679337 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"3fb2f25a823cf4f1505a14cda62cb5c1441ea570567b17c256afb513b695ab7a"} Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.688984 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ce6c3961-b502-497c-9e1f-060d93d96768","Type":"ContainerStarted","Data":"59de0a1fa05c9a563cf9813bdaf72c071cb36468c1d83d447ee33e956168b1ba"} Feb 24 09:25:00 
crc kubenswrapper[4822]: I0224 09:25:00.697090 4822 generic.go:334] "Generic (PLEG): container finished" podID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerID="94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79" exitCode=0 Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.697225 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" event={"ID":"2153ae49-dd49-45fa-b8cc-00f2c44f744e","Type":"ContainerDied","Data":"94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79"} Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.700970 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"531a3082-e3fe-40f7-b7ca-63b78cbb3fcd","Type":"ContainerStarted","Data":"7bdb8db9a85de515247dc52d5cc2426e43cc44614128e7c62bf1212e1e0b0bae"} Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.704027 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="f32c98ab2588772fcc9ba8d953e16671b6da925a4568877710d3fd67911e6581" exitCode=0 Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.704079 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"f32c98ab2588772fcc9ba8d953e16671b6da925a4568877710d3fd67911e6581"} Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.788236 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.260004522 podStartE2EDuration="20.788215936s" podCreationTimestamp="2026-02-24 09:24:40 +0000 UTC" firstStartedPulling="2026-02-24 09:24:47.001103918 +0000 UTC m=+1009.388866466" lastFinishedPulling="2026-02-24 09:24:59.529315302 +0000 UTC m=+1021.917077880" observedRunningTime="2026-02-24 09:25:00.778612554 +0000 UTC m=+1023.166375122" watchObservedRunningTime="2026-02-24 
09:25:00.788215936 +0000 UTC m=+1023.175978494" Feb 24 09:25:00 crc kubenswrapper[4822]: I0224 09:25:00.812700 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.162370849 podStartE2EDuration="19.812679211s" podCreationTimestamp="2026-02-24 09:24:41 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.876770671 +0000 UTC m=+1009.264533219" lastFinishedPulling="2026-02-24 09:24:59.527079023 +0000 UTC m=+1021.914841581" observedRunningTime="2026-02-24 09:25:00.802743899 +0000 UTC m=+1023.190506457" watchObservedRunningTime="2026-02-24 09:25:00.812679211 +0000 UTC m=+1023.200441769" Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.717411 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea"} Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.720597 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" event={"ID":"2153ae49-dd49-45fa-b8cc-00f2c44f744e","Type":"ContainerStarted","Data":"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0"} Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.720788 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.723934 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc"} Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.756268 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.008216066 
podStartE2EDuration="31.756243617s" podCreationTimestamp="2026-02-24 09:24:30 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.782182068 +0000 UTC m=+1009.169944606" lastFinishedPulling="2026-02-24 09:24:56.530209609 +0000 UTC m=+1018.917972157" observedRunningTime="2026-02-24 09:25:01.745509754 +0000 UTC m=+1024.133272322" watchObservedRunningTime="2026-02-24 09:25:01.756243617 +0000 UTC m=+1024.144006195" Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.772695 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.700973399 podStartE2EDuration="30.772663949s" podCreationTimestamp="2026-02-24 09:24:31 +0000 UTC" firstStartedPulling="2026-02-24 09:24:46.41831918 +0000 UTC m=+1008.806081728" lastFinishedPulling="2026-02-24 09:24:56.49000973 +0000 UTC m=+1018.877772278" observedRunningTime="2026-02-24 09:25:01.770597965 +0000 UTC m=+1024.158360603" watchObservedRunningTime="2026-02-24 09:25:01.772663949 +0000 UTC m=+1024.160426527" Feb 24 09:25:01 crc kubenswrapper[4822]: I0224 09:25:01.793961 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" podStartSLOduration=3.642144508 podStartE2EDuration="33.79393808s" podCreationTimestamp="2026-02-24 09:24:28 +0000 UTC" firstStartedPulling="2026-02-24 09:24:29.584564616 +0000 UTC m=+991.972327164" lastFinishedPulling="2026-02-24 09:24:59.736358168 +0000 UTC m=+1022.124120736" observedRunningTime="2026-02-24 09:25:01.789006039 +0000 UTC m=+1024.176768597" watchObservedRunningTime="2026-02-24 09:25:01.79393808 +0000 UTC m=+1024.181700638" Feb 24 09:25:02 crc kubenswrapper[4822]: I0224 09:25:02.191357 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 24 09:25:02 crc kubenswrapper[4822]: I0224 09:25:02.317132 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39250: no serving certificate available for the 
kubelet" Feb 24 09:25:02 crc kubenswrapper[4822]: I0224 09:25:02.536188 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 24 09:25:02 crc kubenswrapper[4822]: I0224 09:25:02.732993 4822 generic.go:334] "Generic (PLEG): container finished" podID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerID="5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8" exitCode=0 Feb 24 09:25:02 crc kubenswrapper[4822]: I0224 09:25:02.733143 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" event={"ID":"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566","Type":"ContainerDied","Data":"5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8"} Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.057961 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.058591 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.192010 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.246685 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.342099 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.537329 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.579255 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39260: no serving certificate available for the kubelet" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 
09:25:03.598447 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.743781 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" event={"ID":"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566","Type":"ContainerStarted","Data":"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6"} Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.781753 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.781839 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.805522 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" podStartSLOduration=-9223372001.049269 podStartE2EDuration="35.805507248s" podCreationTimestamp="2026-02-24 09:24:28 +0000 UTC" firstStartedPulling="2026-02-24 09:24:29.464163553 +0000 UTC m=+991.851926101" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:03.763173223 +0000 UTC m=+1026.150935771" watchObservedRunningTime="2026-02-24 09:25:03.805507248 +0000 UTC m=+1026.193269786" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.817751 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39272: no serving certificate available for the kubelet" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.820970 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.971739 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"] Feb 24 09:25:03 crc kubenswrapper[4822]: I0224 09:25:03.972011 4822 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="dnsmasq-dns" containerID="cri-o://49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0" gracePeriod=10 Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.004522 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.005982 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.015198 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.019405 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.109766 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6kl2p"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.110907 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.112665 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.117306 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.117457 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfzw\" (UniqueName: \"kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.117545 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.117637 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.144419 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-6kl2p"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.184686 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.206789 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.207925 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.235632 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.235814 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-combined-ca-bundle\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.235921 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236100 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-978l5\" (UniqueName: 
\"kubernetes.io/projected/a55a5b26-a75e-4a25-aa08-866c352baed5-kube-api-access-978l5\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236144 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236195 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55a5b26-a75e-4a25-aa08-866c352baed5-config\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236303 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqfzw\" (UniqueName: \"kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236328 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovn-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236415 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.236526 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovs-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.239804 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.240836 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.241739 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.243032 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.256476 4822 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.277043 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.278399 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.280555 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.280583 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.280823 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.282470 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5bkdd" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.285394 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.295560 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqfzw\" (UniqueName: \"kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw\") pod \"dnsmasq-dns-6bc7876d45-w2cjf\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341246 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-combined-ca-bundle\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 
09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341440 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341463 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341478 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341499 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbqb\" (UniqueName: \"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341517 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-978l5\" (UniqueName: \"kubernetes.io/projected/a55a5b26-a75e-4a25-aa08-866c352baed5-kube-api-access-978l5\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " 
pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341536 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341559 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55a5b26-a75e-4a25-aa08-866c352baed5-config\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341587 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovn-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341617 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341636 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovs-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 
09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.341817 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovs-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.342976 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55a5b26-a75e-4a25-aa08-866c352baed5-config\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.343254 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a55a5b26-a75e-4a25-aa08-866c352baed5-ovn-rundir\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.345877 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.347364 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a5b26-a75e-4a25-aa08-866c352baed5-combined-ca-bundle\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.355972 4822 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-978l5\" (UniqueName: \"kubernetes.io/projected/a55a5b26-a75e-4a25-aa08-866c352baed5-kube-api-access-978l5\") pod \"ovn-controller-metrics-6kl2p\" (UID: \"a55a5b26-a75e-4a25-aa08-866c352baed5\") " pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.366958 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.434369 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6kl2p" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.443362 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qbqb\" (UniqueName: \"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.443400 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98m6p\" (UniqueName: \"kubernetes.io/projected/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-kube-api-access-98m6p\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.443467 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.443528 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.444204 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.444269 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.446035 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.446124 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.446236 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.447011 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.447856 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.448215 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.449090 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.454568 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.454645 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.454701 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-config\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.454717 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-scripts\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.458560 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qbqb\" (UniqueName: \"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb\") pod \"dnsmasq-dns-8554648995-w86b7\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.543829 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556142 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlnbd\" (UniqueName: \"kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd\") pod \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556218 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc\") pod \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556306 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config\") pod \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\" (UID: \"2153ae49-dd49-45fa-b8cc-00f2c44f744e\") " Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556580 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556631 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-config\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556649 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-scripts\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556673 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98m6p\" (UniqueName: \"kubernetes.io/projected/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-kube-api-access-98m6p\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556725 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556748 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.556804 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.558832 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc 
kubenswrapper[4822]: I0224 09:25:04.558887 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-config\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.559669 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-scripts\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.560231 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.565528 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.565974 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd" (OuterVolumeSpecName: "kube-api-access-hlnbd") pod "2153ae49-dd49-45fa-b8cc-00f2c44f744e" (UID: "2153ae49-dd49-45fa-b8cc-00f2c44f744e"). InnerVolumeSpecName "kube-api-access-hlnbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.566236 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.581216 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98m6p\" (UniqueName: \"kubernetes.io/projected/b9dbaf5c-c271-481f-9cc6-079c9d6256c0-kube-api-access-98m6p\") pod \"ovn-northd-0\" (UID: \"b9dbaf5c-c271-481f-9cc6-079c9d6256c0\") " pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.594590 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2153ae49-dd49-45fa-b8cc-00f2c44f744e" (UID: "2153ae49-dd49-45fa-b8cc-00f2c44f744e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.597718 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config" (OuterVolumeSpecName: "config") pod "2153ae49-dd49-45fa-b8cc-00f2c44f744e" (UID: "2153ae49-dd49-45fa-b8cc-00f2c44f744e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.618790 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.658769 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.658818 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2153ae49-dd49-45fa-b8cc-00f2c44f744e-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.658830 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlnbd\" (UniqueName: \"kubernetes.io/projected/2153ae49-dd49-45fa-b8cc-00f2c44f744e-kube-api-access-hlnbd\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.765261 4822 generic.go:334] "Generic (PLEG): container finished" podID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerID="49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0" exitCode=0 Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.765796 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" event={"ID":"2153ae49-dd49-45fa-b8cc-00f2c44f744e","Type":"ContainerDied","Data":"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0"} Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.765848 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" event={"ID":"2153ae49-dd49-45fa-b8cc-00f2c44f744e","Type":"ContainerDied","Data":"f47652baa26e49aaad80a5f9c797514b5bf9124a78c4f29074d9339f96611eb7"} Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.765868 4822 scope.go:117] "RemoveContainer" containerID="49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.765971 4822 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z69ts" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.789565 4822 scope.go:117] "RemoveContainer" containerID="94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.804331 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.813484 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.819428 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z69ts"] Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.832321 4822 scope.go:117] "RemoveContainer" containerID="49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0" Feb 24 09:25:04 crc kubenswrapper[4822]: E0224 09:25:04.832713 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0\": container with ID starting with 49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0 not found: ID does not exist" containerID="49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.832824 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0"} err="failed to get container status \"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0\": rpc error: code = NotFound desc = could not find container \"49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0\": container with ID starting with 49fdc777fbd212a0838d9339111d6848b35426d07e6ed2109780a4295a29d0e0 not found: ID does not exist" 
Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.832907 4822 scope.go:117] "RemoveContainer" containerID="94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79" Feb 24 09:25:04 crc kubenswrapper[4822]: E0224 09:25:04.833449 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79\": container with ID starting with 94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79 not found: ID does not exist" containerID="94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.833493 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79"} err="failed to get container status \"94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79\": rpc error: code = NotFound desc = could not find container \"94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79\": container with ID starting with 94781c525c4603513ee3e4e9ed01a17c933ecf854f966e8e7bb42603a08b1d79 not found: ID does not exist" Feb 24 09:25:04 crc kubenswrapper[4822]: I0224 09:25:04.898467 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6kl2p"] Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.034031 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:05 crc kubenswrapper[4822]: W0224 09:25:05.053285 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165384c0_7e5e_45a4_bdb5_2ab821dd7ff1.slice/crio-0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75 WatchSource:0}: Error finding container 0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75: Status 404 
returned error can't find the container with id 0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75 Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.092626 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 09:25:05 crc kubenswrapper[4822]: W0224 09:25:05.102192 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9dbaf5c_c271_481f_9cc6_079c9d6256c0.slice/crio-72f1b537704c8f704c81603aa3914903ab076b9690551c4f67abd0600a3561fd WatchSource:0}: Error finding container 72f1b537704c8f704c81603aa3914903ab076b9690551c4f67abd0600a3561fd: Status 404 returned error can't find the container with id 72f1b537704c8f704c81603aa3914903ab076b9690551c4f67abd0600a3561fd Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.386710 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39286: no serving certificate available for the kubelet" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.394684 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.424609 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"] Feb 24 09:25:05 crc kubenswrapper[4822]: E0224 09:25:05.424937 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="dnsmasq-dns" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.424953 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="dnsmasq-dns" Feb 24 09:25:05 crc kubenswrapper[4822]: E0224 09:25:05.424992 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="init" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.424997 4822 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="init" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.425129 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" containerName="dnsmasq-dns" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.426170 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.437462 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"] Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.489194 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.571350 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.571400 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.571421 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 
09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.571459 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6nh\" (UniqueName: \"kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.571507 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.673097 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.673213 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.673241 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc 
kubenswrapper[4822]: I0224 09:25:05.673257 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.673293 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6nh\" (UniqueName: \"kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.674040 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.674399 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.674614 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.674827 4822 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.689936 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6nh\" (UniqueName: \"kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh\") pod \"dnsmasq-dns-b8fbc5445-mz7t2\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.753633 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.778424 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6kl2p" event={"ID":"a55a5b26-a75e-4a25-aa08-866c352baed5","Type":"ContainerStarted","Data":"d1072e3224caf5845ee32b98eead00b7876c986e8a8c29812cc6aae318a2c066"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.778465 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6kl2p" event={"ID":"a55a5b26-a75e-4a25-aa08-866c352baed5","Type":"ContainerStarted","Data":"c2728bf779e5b73cb28949c1b47d978eb1e96e4fec6b4607d43cdee0eb2d7b8d"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.786417 4822 generic.go:334] "Generic (PLEG): container finished" podID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" containerID="f1af0a240394f894fae9a5fd23fc7032ee9e229b4d63e9bc48dac4fcee06ffdb" exitCode=0 Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.786517 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" 
event={"ID":"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b","Type":"ContainerDied","Data":"f1af0a240394f894fae9a5fd23fc7032ee9e229b4d63e9bc48dac4fcee06ffdb"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.787288 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" event={"ID":"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b","Type":"ContainerStarted","Data":"f3367a3379faac54dbd65145ab25c5dd04d157e742aec3cdb7cee32eefaec897"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.788978 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9dbaf5c-c271-481f-9cc6-079c9d6256c0","Type":"ContainerStarted","Data":"72f1b537704c8f704c81603aa3914903ab076b9690551c4f67abd0600a3561fd"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.790500 4822 generic.go:334] "Generic (PLEG): container finished" podID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerID="c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61" exitCode=0 Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.790575 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w86b7" event={"ID":"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1","Type":"ContainerDied","Data":"c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.790790 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w86b7" event={"ID":"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1","Type":"ContainerStarted","Data":"0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75"} Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.791132 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="dnsmasq-dns" containerID="cri-o://5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6" 
gracePeriod=10 Feb 24 09:25:05 crc kubenswrapper[4822]: I0224 09:25:05.797450 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6kl2p" podStartSLOduration=1.7974348 podStartE2EDuration="1.7974348s" podCreationTimestamp="2026-02-24 09:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:05.795019816 +0000 UTC m=+1028.182782384" watchObservedRunningTime="2026-02-24 09:25:05.7974348 +0000 UTC m=+1028.185197348" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.102304 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"] Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.361337 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2153ae49-dd49-45fa-b8cc-00f2c44f744e" path="/var/lib/kubelet/pods/2153ae49-dd49-45fa-b8cc-00f2c44f744e/volumes" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.512866 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.518710 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.523278 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.523313 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.523306 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5qwj6" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.523392 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.548895 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.597599 4822 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 09:25:06 crc kubenswrapper[4822]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 09:25:06 crc kubenswrapper[4822]: > podSandboxID="f3367a3379faac54dbd65145ab25c5dd04d157e742aec3cdb7cee32eefaec897" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.597777 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:25:06 crc kubenswrapper[4822]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8ch647h5fdh676h5c8h566h96h5d8hdh569h64dh5b5h587h55h5cch58dh658h67h5f6h64fh648h6h59fh65ch7hf9hf6h74hf8hch596h5b8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqfzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6bc7876d45-w2cjf_openstack(526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 09:25:06 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.598865 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" podUID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.609650 4822 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 09:25:06 crc kubenswrapper[4822]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 09:25:06 crc kubenswrapper[4822]: > podSandboxID="0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.609828 4822 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:25:06 crc kubenswrapper[4822]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,Rec
ursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8554648995-w86b7_openstack(165384c0-7e5e-45a4-bdb5-2ab821dd7ff1): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 09:25:06 crc kubenswrapper[4822]: > logger="UnhandledError" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.611084 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount 
`/var/lib/kubelet/pods/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8554648995-w86b7" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.695747 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0adc7aa-1b44-4523-8103-6a1714eb2432-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.695812 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhm5x\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-kube-api-access-mhm5x\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.696294 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.696405 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-cache\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.696631 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.696704 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-lock\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.749823 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798114 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-lock\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798168 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0adc7aa-1b44-4523-8103-6a1714eb2432-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798196 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhm5x\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-kube-api-access-mhm5x\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798271 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798292 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-cache\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798332 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.798608 4822 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.799076 4822 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.799096 4822 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.799112 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-lock\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc 
kubenswrapper[4822]: E0224 09:25:06.799147 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift podName:e0adc7aa-1b44-4523-8103-6a1714eb2432 nodeName:}" failed. No retries permitted until 2026-02-24 09:25:07.299127797 +0000 UTC m=+1029.686890345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift") pod "swift-storage-0" (UID: "e0adc7aa-1b44-4523-8103-6a1714eb2432") : configmap "swift-ring-files" not found Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.799554 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e0adc7aa-1b44-4523-8103-6a1714eb2432-cache\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.801703 4822 generic.go:334] "Generic (PLEG): container finished" podID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerID="0d5003ade6ad1a1ea7b54351b7d3d7d896d1baa6586ceceae0d85541c2b2bae8" exitCode=0 Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.801759 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" event={"ID":"00667fc1-3e89-475c-84eb-b89736a8d50e","Type":"ContainerDied","Data":"0d5003ade6ad1a1ea7b54351b7d3d7d896d1baa6586ceceae0d85541c2b2bae8"} Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.801783 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" event={"ID":"00667fc1-3e89-475c-84eb-b89736a8d50e","Type":"ContainerStarted","Data":"e84db52c7477ee7dde6a73d0586c123b689423afea388bbac75a6a8f1ab30f38"} Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.805317 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e0adc7aa-1b44-4523-8103-6a1714eb2432-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.808054 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9dbaf5c-c271-481f-9cc6-079c9d6256c0","Type":"ContainerStarted","Data":"4a8d474853ab857e534086625415522954bd67c3bd9fb25b206977e5f20a60de"} Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.823710 4822 generic.go:334] "Generic (PLEG): container finished" podID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerID="5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6" exitCode=0 Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.824012 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" event={"ID":"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566","Type":"ContainerDied","Data":"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6"} Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.824068 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" event={"ID":"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566","Type":"ContainerDied","Data":"ade4656d8ceca7149d5c0af9aa9e5c15f6d8a3f854427906fc49916cea97277b"} Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.824088 4822 scope.go:117] "RemoveContainer" containerID="5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.824274 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7zz57" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.838423 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhm5x\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-kube-api-access-mhm5x\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.854151 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.863429 4822 scope.go:117] "RemoveContainer" containerID="5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.875610 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39288: no serving certificate available for the kubelet" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.897179 4822 scope.go:117] "RemoveContainer" containerID="5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.897864 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6\": container with ID starting with 5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6 not found: ID does not exist" containerID="5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.897960 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6"} err="failed to get container status \"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6\": rpc error: code = NotFound desc = could not find container \"5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6\": container with ID starting with 5eabb795e7c87b1136eddf139c3adad8c804712f4f7d056ba689b47c86bb98c6 not found: ID does not exist" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.897989 4822 scope.go:117] "RemoveContainer" containerID="5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8" Feb 24 09:25:06 crc kubenswrapper[4822]: E0224 09:25:06.900112 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8\": container with ID starting with 5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8 not found: ID does not exist" containerID="5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.900168 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8"} err="failed to get container status \"5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8\": rpc error: code = NotFound desc = could not find container \"5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8\": container with ID starting with 5e6faf88ee5a6c830a862b325ea6e9cee20fb5a7034f1a08021ced78b18c4ea8 not found: ID does not exist" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.902566 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config\") pod \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\" (UID: 
\"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.902750 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46v4r\" (UniqueName: \"kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r\") pod \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.902866 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc\") pod \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\" (UID: \"ccca9e57-53c8-4d44-b4e9-2a6b4de5b566\") " Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.909799 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r" (OuterVolumeSpecName: "kube-api-access-46v4r") pod "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" (UID: "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566"). InnerVolumeSpecName "kube-api-access-46v4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.961546 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config" (OuterVolumeSpecName: "config") pod "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" (UID: "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:06 crc kubenswrapper[4822]: I0224 09:25:06.977629 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" (UID: "ccca9e57-53c8-4d44-b4e9-2a6b4de5b566"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.005366 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46v4r\" (UniqueName: \"kubernetes.io/projected/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-kube-api-access-46v4r\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.005409 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.005422 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.059573 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-55sh2"] Feb 24 09:25:07 crc kubenswrapper[4822]: E0224 09:25:07.059886 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="init" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.059898 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="init" Feb 24 09:25:07 crc kubenswrapper[4822]: E0224 09:25:07.059920 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="dnsmasq-dns" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.059926 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="dnsmasq-dns" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.060049 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" containerName="dnsmasq-dns" Feb 24 09:25:07 crc 
kubenswrapper[4822]: I0224 09:25:07.060523 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.063291 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.063525 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.063558 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.080111 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-55sh2"] Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217425 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217458 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217477 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkjl\" (UniqueName: \"kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " 
pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217519 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217533 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217558 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.217578 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.247518 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.251735 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"] Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.265176 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7zz57"] Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.318747 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc\") pod \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.318985 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb\") pod \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319042 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config\") pod \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319071 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqfzw\" (UniqueName: \"kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw\") pod \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\" (UID: \"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b\") " Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319353 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319382 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319401 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.319441 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.320478 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: E0224 09:25:07.319542 4822 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 09:25:07 crc kubenswrapper[4822]: E0224 09:25:07.320525 4822 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 09:25:07 crc kubenswrapper[4822]: E0224 09:25:07.320578 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift podName:e0adc7aa-1b44-4523-8103-6a1714eb2432 nodeName:}" failed. No retries permitted until 2026-02-24 09:25:08.320561778 +0000 UTC m=+1030.708324326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift") pod "swift-storage-0" (UID: "e0adc7aa-1b44-4523-8103-6a1714eb2432") : configmap "swift-ring-files" not found Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.320395 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.321102 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.321694 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.322031 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.325481 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.325552 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkjl\" (UniqueName: \"kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.327405 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.345237 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw" (OuterVolumeSpecName: "kube-api-access-wqfzw") pod "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" (UID: "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b"). InnerVolumeSpecName "kube-api-access-wqfzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.345736 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.348446 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.356451 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkjl\" (UniqueName: \"kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl\") pod \"swift-ring-rebalance-55sh2\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.372502 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config" (OuterVolumeSpecName: "config") pod "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" (UID: "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.375278 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" (UID: "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.410473 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" (UID: "526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.431707 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.431835 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqfzw\" (UniqueName: \"kubernetes.io/projected/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-kube-api-access-wqfzw\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.431952 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.432032 4822 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.517113 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.830312 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" event={"ID":"526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b","Type":"ContainerDied","Data":"f3367a3379faac54dbd65145ab25c5dd04d157e742aec3cdb7cee32eefaec897"} Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.830333 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-w2cjf" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.830577 4822 scope.go:117] "RemoveContainer" containerID="f1af0a240394f894fae9a5fd23fc7032ee9e229b4d63e9bc48dac4fcee06ffdb" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.832021 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" event={"ID":"00667fc1-3e89-475c-84eb-b89736a8d50e","Type":"ContainerStarted","Data":"06f06b125bae00f1d68ba91e47bb7a451f291230f68bf1716738ae5bd99a1c5e"} Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.832716 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.834576 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9dbaf5c-c271-481f-9cc6-079c9d6256c0","Type":"ContainerStarted","Data":"018518125cb65ef67bf74b22c2c11b573fa1c66c30a42e99ebb83d6419f478bb"} Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.834791 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.851897 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" podStartSLOduration=2.851882319 podStartE2EDuration="2.851882319s" podCreationTimestamp="2026-02-24 09:25:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:07.849369823 +0000 UTC m=+1030.237132381" watchObservedRunningTime="2026-02-24 09:25:07.851882319 +0000 UTC m=+1030.239644877" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.897576 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.6243632310000002 podStartE2EDuration="3.897549422s" podCreationTimestamp="2026-02-24 09:25:04 +0000 UTC" firstStartedPulling="2026-02-24 09:25:05.104238443 +0000 UTC m=+1027.492000991" lastFinishedPulling="2026-02-24 09:25:06.377424624 +0000 UTC m=+1028.765187182" observedRunningTime="2026-02-24 09:25:07.878213803 +0000 UTC m=+1030.265976361" watchObservedRunningTime="2026-02-24 09:25:07.897549422 +0000 UTC m=+1030.285311980" Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.919568 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.922832 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-w2cjf"] Feb 24 09:25:07 crc kubenswrapper[4822]: I0224 09:25:07.961905 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-55sh2"] Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.350724 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:08 crc kubenswrapper[4822]: E0224 09:25:08.351131 4822 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 09:25:08 crc kubenswrapper[4822]: E0224 09:25:08.351177 4822 projected.go:194] Error 
preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 09:25:08 crc kubenswrapper[4822]: E0224 09:25:08.351255 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift podName:e0adc7aa-1b44-4523-8103-6a1714eb2432 nodeName:}" failed. No retries permitted until 2026-02-24 09:25:10.351230918 +0000 UTC m=+1032.738993516 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift") pod "swift-storage-0" (UID: "e0adc7aa-1b44-4523-8103-6a1714eb2432") : configmap "swift-ring-files" not found Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.354779 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" path="/var/lib/kubelet/pods/526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b/volumes" Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.356906 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccca9e57-53c8-4d44-b4e9-2a6b4de5b566" path="/var/lib/kubelet/pods/ccca9e57-53c8-4d44-b4e9-2a6b4de5b566/volumes" Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.448271 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39304: no serving certificate available for the kubelet" Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.848319 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w86b7" event={"ID":"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1","Type":"ContainerStarted","Data":"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441"} Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.848544 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.850335 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55sh2" event={"ID":"3a3bc793-65c3-4a7b-8db1-4269d70f493a","Type":"ContainerStarted","Data":"f8aa58bfd0c9e510dcd3c740a890a9933ab86805a863d6222a0d6b99835f147a"} Feb 24 09:25:08 crc kubenswrapper[4822]: I0224 09:25:08.872973 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-w86b7" podStartSLOduration=4.872957566 podStartE2EDuration="4.872957566s" podCreationTimestamp="2026-02-24 09:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:08.867319928 +0000 UTC m=+1031.255082476" watchObservedRunningTime="2026-02-24 09:25:08.872957566 +0000 UTC m=+1031.260720114" Feb 24 09:25:09 crc kubenswrapper[4822]: I0224 09:25:09.911314 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39312: no serving certificate available for the kubelet" Feb 24 09:25:10 crc kubenswrapper[4822]: I0224 09:25:10.390094 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:10 crc kubenswrapper[4822]: E0224 09:25:10.390789 4822 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 09:25:10 crc kubenswrapper[4822]: E0224 09:25:10.391005 4822 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 09:25:10 crc kubenswrapper[4822]: E0224 09:25:10.391180 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift podName:e0adc7aa-1b44-4523-8103-6a1714eb2432 nodeName:}" failed. 
No retries permitted until 2026-02-24 09:25:14.391154554 +0000 UTC m=+1036.778917142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift") pod "swift-storage-0" (UID: "e0adc7aa-1b44-4523-8103-6a1714eb2432") : configmap "swift-ring-files" not found Feb 24 09:25:11 crc kubenswrapper[4822]: I0224 09:25:11.490301 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38630: no serving certificate available for the kubelet" Feb 24 09:25:11 crc kubenswrapper[4822]: I0224 09:25:11.616692 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:25:11 crc kubenswrapper[4822]: I0224 09:25:11.616791 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:25:11 crc kubenswrapper[4822]: I0224 09:25:11.877455 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55sh2" event={"ID":"3a3bc793-65c3-4a7b-8db1-4269d70f493a","Type":"ContainerStarted","Data":"902eb9845f7f158d118cc3418d18722ce9c472f0d93a359ae5f4048c8718002e"} Feb 24 09:25:11 crc kubenswrapper[4822]: I0224 09:25:11.913579 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-55sh2" podStartSLOduration=1.634267845 podStartE2EDuration="4.913555792s" podCreationTimestamp="2026-02-24 09:25:07 +0000 UTC" firstStartedPulling="2026-02-24 09:25:07.968136382 +0000 UTC m=+1030.355898930" lastFinishedPulling="2026-02-24 09:25:11.247424299 +0000 UTC m=+1033.635186877" observedRunningTime="2026-02-24 09:25:11.901534535 +0000 UTC m=+1034.289297113" watchObservedRunningTime="2026-02-24 09:25:11.913555792 +0000 UTC m=+1034.301318370" Feb 24 09:25:12 crc kubenswrapper[4822]: I0224 09:25:12.963564 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38634: no serving certificate available for the kubelet" 
Feb 24 09:25:14 crc kubenswrapper[4822]: I0224 09:25:14.464767 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:14 crc kubenswrapper[4822]: E0224 09:25:14.464905 4822 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 09:25:14 crc kubenswrapper[4822]: E0224 09:25:14.467270 4822 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 09:25:14 crc kubenswrapper[4822]: E0224 09:25:14.467346 4822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift podName:e0adc7aa-1b44-4523-8103-6a1714eb2432 nodeName:}" failed. No retries permitted until 2026-02-24 09:25:22.467320359 +0000 UTC m=+1044.855082947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift") pod "swift-storage-0" (UID: "e0adc7aa-1b44-4523-8103-6a1714eb2432") : configmap "swift-ring-files" not found Feb 24 09:25:14 crc kubenswrapper[4822]: I0224 09:25:14.537677 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38644: no serving certificate available for the kubelet" Feb 24 09:25:14 crc kubenswrapper[4822]: I0224 09:25:14.545215 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:15 crc kubenswrapper[4822]: I0224 09:25:15.755197 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:15 crc kubenswrapper[4822]: I0224 09:25:15.856563 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:15 crc kubenswrapper[4822]: I0224 09:25:15.856862 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-w86b7" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="dnsmasq-dns" containerID="cri-o://9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441" gracePeriod=10 Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.010861 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38650: no serving certificate available for the kubelet" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.294210 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.410372 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb\") pod \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.410411 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qbqb\" (UniqueName: \"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb\") pod \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.410472 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb\") pod \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.410530 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc\") pod \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.410576 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config\") pod \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\" (UID: \"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1\") " Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.416193 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb" (OuterVolumeSpecName: "kube-api-access-2qbqb") pod "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" (UID: "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1"). InnerVolumeSpecName "kube-api-access-2qbqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.445989 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" (UID: "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.449788 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" (UID: "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.466054 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config" (OuterVolumeSpecName: "config") pod "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" (UID: "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.466140 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" (UID: "165384c0-7e5e-45a4-bdb5-2ab821dd7ff1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.512694 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.512721 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.512731 4822 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.512740 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qbqb\" (UniqueName: \"kubernetes.io/projected/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-kube-api-access-2qbqb\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.512749 4822 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.926383 4822 generic.go:334] "Generic (PLEG): container finished" podID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerID="9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441" exitCode=0 Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.926483 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w86b7" event={"ID":"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1","Type":"ContainerDied","Data":"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441"} Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 
09:25:16.926524 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w86b7" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.926579 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w86b7" event={"ID":"165384c0-7e5e-45a4-bdb5-2ab821dd7ff1","Type":"ContainerDied","Data":"0afdbf885aa4551cbb6efcc83e3a4d5ba7ca38dc4b91ec923003952241caca75"} Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.926647 4822 scope.go:117] "RemoveContainer" containerID="9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.962044 4822 scope.go:117] "RemoveContainer" containerID="c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61" Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.980186 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:16 crc kubenswrapper[4822]: I0224 09:25:16.988452 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w86b7"] Feb 24 09:25:17 crc kubenswrapper[4822]: I0224 09:25:17.008825 4822 scope.go:117] "RemoveContainer" containerID="9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441" Feb 24 09:25:17 crc kubenswrapper[4822]: E0224 09:25:17.009373 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441\": container with ID starting with 9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441 not found: ID does not exist" containerID="9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441" Feb 24 09:25:17 crc kubenswrapper[4822]: I0224 09:25:17.009422 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441"} err="failed to get container status \"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441\": rpc error: code = NotFound desc = could not find container \"9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441\": container with ID starting with 9b35c64bf97dd45d6e1923f4b37d5418708bfe8bf039a6526ddd1219a55d6441 not found: ID does not exist" Feb 24 09:25:17 crc kubenswrapper[4822]: I0224 09:25:17.009441 4822 scope.go:117] "RemoveContainer" containerID="c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61" Feb 24 09:25:17 crc kubenswrapper[4822]: E0224 09:25:17.009761 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61\": container with ID starting with c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61 not found: ID does not exist" containerID="c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61" Feb 24 09:25:17 crc kubenswrapper[4822]: I0224 09:25:17.009836 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61"} err="failed to get container status \"c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61\": rpc error: code = NotFound desc = could not find container \"c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61\": container with ID starting with c68164a068d72a9518f8811687a661d103b538d3883d462fbe30d6017e653f61 not found: ID does not exist" Feb 24 09:25:17 crc kubenswrapper[4822]: I0224 09:25:17.597250 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38662: no serving certificate available for the kubelet" Feb 24 09:25:18 crc kubenswrapper[4822]: I0224 09:25:18.349613 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" path="/var/lib/kubelet/pods/165384c0-7e5e-45a4-bdb5-2ab821dd7ff1/volumes" Feb 24 09:25:18 crc kubenswrapper[4822]: I0224 09:25:18.956347 4822 generic.go:334] "Generic (PLEG): container finished" podID="3a3bc793-65c3-4a7b-8db1-4269d70f493a" containerID="902eb9845f7f158d118cc3418d18722ce9c472f0d93a359ae5f4048c8718002e" exitCode=0 Feb 24 09:25:18 crc kubenswrapper[4822]: I0224 09:25:18.956506 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55sh2" event={"ID":"3a3bc793-65c3-4a7b-8db1-4269d70f493a","Type":"ContainerDied","Data":"902eb9845f7f158d118cc3418d18722ce9c472f0d93a359ae5f4048c8718002e"} Feb 24 09:25:19 crc kubenswrapper[4822]: I0224 09:25:19.057901 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38670: no serving certificate available for the kubelet" Feb 24 09:25:19 crc kubenswrapper[4822]: I0224 09:25:19.970215 4822 generic.go:334] "Generic (PLEG): container finished" podID="f5d244a0-7be8-4ea4-b8aa-d4d461cb1146" containerID="a4482afe55783bdd9eb8f679ae8239fb65c96f27296c7b916c66bc00a02bf402" exitCode=0 Feb 24 09:25:19 crc kubenswrapper[4822]: I0224 09:25:19.970355 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146","Type":"ContainerDied","Data":"a4482afe55783bdd9eb8f679ae8239fb65c96f27296c7b916c66bc00a02bf402"} Feb 24 09:25:19 crc kubenswrapper[4822]: I0224 09:25:19.972950 4822 generic.go:334] "Generic (PLEG): container finished" podID="ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11" containerID="e6bd616cd1448d22f313e38f220996445a159a52a7329544e4c3432bfe2c47a0" exitCode=0 Feb 24 09:25:19 crc kubenswrapper[4822]: I0224 09:25:19.973050 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11","Type":"ContainerDied","Data":"e6bd616cd1448d22f313e38f220996445a159a52a7329544e4c3432bfe2c47a0"} 
Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.351682 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495576 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495640 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495712 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495760 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495819 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 
09:25:20.495874 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.495972 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stkjl\" (UniqueName: \"kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl\") pod \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\" (UID: \"3a3bc793-65c3-4a7b-8db1-4269d70f493a\") " Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.496285 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.496718 4822 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.497063 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.502918 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl" (OuterVolumeSpecName: "kube-api-access-stkjl") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "kube-api-access-stkjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.506162 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.521017 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts" (OuterVolumeSpecName: "scripts") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.525024 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.534422 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3a3bc793-65c3-4a7b-8db1-4269d70f493a" (UID: "3a3bc793-65c3-4a7b-8db1-4269d70f493a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598546 4822 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a3bc793-65c3-4a7b-8db1-4269d70f493a-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598579 4822 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598589 4822 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598600 4822 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc793-65c3-4a7b-8db1-4269d70f493a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598609 4822 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc793-65c3-4a7b-8db1-4269d70f493a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.598617 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stkjl\" (UniqueName: 
\"kubernetes.io/projected/3a3bc793-65c3-4a7b-8db1-4269d70f493a-kube-api-access-stkjl\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.633016 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38676: no serving certificate available for the kubelet" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.980776 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55sh2" event={"ID":"3a3bc793-65c3-4a7b-8db1-4269d70f493a","Type":"ContainerDied","Data":"f8aa58bfd0c9e510dcd3c740a890a9933ab86805a863d6222a0d6b99835f147a"} Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.980815 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8aa58bfd0c9e510dcd3c740a890a9933ab86805a863d6222a0d6b99835f147a" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.980795 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55sh2" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.983494 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae190e3e-e33b-40dd-b0fe-86cd0aa6fe11","Type":"ContainerStarted","Data":"cd658e10d18e918c431cdc4693631f7b28aaf69cd01d596d87bf86b24c240083"} Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.985166 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.988865 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5d244a0-7be8-4ea4-b8aa-d4d461cb1146","Type":"ContainerStarted","Data":"f8bd3b0a01cbed293325e66f78904c6911e43134a891f80cc2b088c684e00b40"} Feb 24 09:25:20 crc kubenswrapper[4822]: I0224 09:25:20.989815 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 24 09:25:21 crc 
kubenswrapper[4822]: I0224 09:25:21.023194 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.593352857 podStartE2EDuration="53.023171424s" podCreationTimestamp="2026-02-24 09:24:28 +0000 UTC" firstStartedPulling="2026-02-24 09:24:30.613592004 +0000 UTC m=+993.001354552" lastFinishedPulling="2026-02-24 09:24:46.043410571 +0000 UTC m=+1008.431173119" observedRunningTime="2026-02-24 09:25:21.017346444 +0000 UTC m=+1043.405108992" watchObservedRunningTime="2026-02-24 09:25:21.023171424 +0000 UTC m=+1043.410933982" Feb 24 09:25:21 crc kubenswrapper[4822]: I0224 09:25:21.068653 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.338565452 podStartE2EDuration="53.068632311s" podCreationTimestamp="2026-02-24 09:24:28 +0000 UTC" firstStartedPulling="2026-02-24 09:24:30.521860696 +0000 UTC m=+992.909623244" lastFinishedPulling="2026-02-24 09:24:46.251927555 +0000 UTC m=+1008.639690103" observedRunningTime="2026-02-24 09:25:21.059640474 +0000 UTC m=+1043.447403022" watchObservedRunningTime="2026-02-24 09:25:21.068632311 +0000 UTC m=+1043.456394869" Feb 24 09:25:22 crc kubenswrapper[4822]: I0224 09:25:22.102479 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39120: no serving certificate available for the kubelet" Feb 24 09:25:22 crc kubenswrapper[4822]: I0224 09:25:22.528496 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod \"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:22 crc kubenswrapper[4822]: I0224 09:25:22.534550 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e0adc7aa-1b44-4523-8103-6a1714eb2432-etc-swift\") pod 
\"swift-storage-0\" (UID: \"e0adc7aa-1b44-4523-8103-6a1714eb2432\") " pod="openstack/swift-storage-0" Feb 24 09:25:22 crc kubenswrapper[4822]: I0224 09:25:22.742017 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 24 09:25:23 crc kubenswrapper[4822]: I0224 09:25:23.316403 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 09:25:23 crc kubenswrapper[4822]: W0224 09:25:23.319461 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0adc7aa_1b44_4523_8103_6a1714eb2432.slice/crio-9a816f8216a527f27cea62a8d795a0cdc84e8eb695d7730b82be0e331df2fdf0 WatchSource:0}: Error finding container 9a816f8216a527f27cea62a8d795a0cdc84e8eb695d7730b82be0e331df2fdf0: Status 404 returned error can't find the container with id 9a816f8216a527f27cea62a8d795a0cdc84e8eb695d7730b82be0e331df2fdf0 Feb 24 09:25:23 crc kubenswrapper[4822]: I0224 09:25:23.695818 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39134: no serving certificate available for the kubelet" Feb 24 09:25:24 crc kubenswrapper[4822]: I0224 09:25:24.014763 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"9a816f8216a527f27cea62a8d795a0cdc84e8eb695d7730b82be0e331df2fdf0"} Feb 24 09:25:24 crc kubenswrapper[4822]: I0224 09:25:24.695649 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 24 09:25:25 crc kubenswrapper[4822]: I0224 09:25:25.156367 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39138: no serving certificate available for the kubelet" Feb 24 09:25:26 crc kubenswrapper[4822]: I0224 09:25:26.033520 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"fe8e2496f32ebd95bf73ba3d4daf77115aeb07985fa240af86639ef58d0b6cb9"} Feb 24 09:25:26 crc kubenswrapper[4822]: I0224 09:25:26.742298 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39140: no serving certificate available for the kubelet" Feb 24 09:25:27 crc kubenswrapper[4822]: I0224 09:25:27.047114 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"e908b7144475e8f533240a4f97b286c6e27eddbcc57bcbb95bec373375d008bb"} Feb 24 09:25:27 crc kubenswrapper[4822]: I0224 09:25:27.047156 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"a32ee567ef5d073608d3c092f25b8815a13898f9d375f0445e5377cb7b2a2e2a"} Feb 24 09:25:27 crc kubenswrapper[4822]: I0224 09:25:27.047166 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"b8e8292d615c7f6d3057ccab3f4297ed5d6cf5d1b6d8aa3255d9aca766a161e4"} Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.059616 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"e0b32a25fdb004368608fe1b9d2f877e46860f5734d87844674ed2dee2da8126"} Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.061050 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"47f4892f3053fa8be1c808077ae57492f90cfc6bd3111c58f1220db4d1cf0d67"} Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.061071 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"1e98c65d3c5c7fa13f78d23445b3ef9d7fe3eebbae22de9378b3d48048a63f53"} Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.196671 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39154: no serving certificate available for the kubelet" Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.977580 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.980355 4822 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kp7bj" podUID="33ac8d7e-339a-4b57-8d8d-38393ee4f9ce" containerName="ovn-controller" probeResult="failure" output=< Feb 24 09:25:28 crc kubenswrapper[4822]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 24 09:25:28 crc kubenswrapper[4822]: > Feb 24 09:25:28 crc kubenswrapper[4822]: I0224 09:25:28.994558 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmwwp" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.075678 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"7b71f965dec934505848d3b0059e0e7a02a58fea8f10cc824d0836645e911799"} Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223182 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kp7bj-config-tjpbx"] Feb 24 09:25:29 crc kubenswrapper[4822]: E0224 09:25:29.223455 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="init" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223471 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="init" Feb 24 09:25:29 crc 
kubenswrapper[4822]: E0224 09:25:29.223482 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="dnsmasq-dns" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223490 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="dnsmasq-dns" Feb 24 09:25:29 crc kubenswrapper[4822]: E0224 09:25:29.223509 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3bc793-65c3-4a7b-8db1-4269d70f493a" containerName="swift-ring-rebalance" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223515 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3bc793-65c3-4a7b-8db1-4269d70f493a" containerName="swift-ring-rebalance" Feb 24 09:25:29 crc kubenswrapper[4822]: E0224 09:25:29.223527 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" containerName="init" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223533 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" containerName="init" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223672 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3bc793-65c3-4a7b-8db1-4269d70f493a" containerName="swift-ring-rebalance" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223686 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="165384c0-7e5e-45a4-bdb5-2ab821dd7ff1" containerName="dnsmasq-dns" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.223697 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="526a26e2-5a9a-4d0a-9b9c-92f5f32f2a3b" containerName="init" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.224172 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.226894 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.236658 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-tjpbx"] Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.329787 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.329890 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.329956 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.330003 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: 
\"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.330026 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.330138 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdmx\" (UniqueName: \"kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432018 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432128 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432184 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts\") pod 
\"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432216 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432246 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spdmx\" (UniqueName: \"kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432283 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.432666 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.434147 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts\") pod 
\"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.434229 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.435093 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.435395 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.452043 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spdmx\" (UniqueName: \"kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx\") pod \"ovn-controller-kp7bj-config-tjpbx\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.546376 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:29 crc kubenswrapper[4822]: I0224 09:25:29.778255 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39162: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.057548 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-tjpbx"] Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.093735 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-tjpbx" event={"ID":"d2c010ad-439f-4ab8-b5e1-377cfd32ac31","Type":"ContainerStarted","Data":"55704afd4c2a288eb88c32d46704580da6bab698c31eb9a3f060ab6a794d70d9"} Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.236255 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.363369 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39164: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.494932 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39166: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.559856 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39180: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.712493 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39190: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.897630 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39198: no serving certificate available for the kubelet" Feb 24 09:25:30 crc kubenswrapper[4822]: I0224 09:25:30.999533 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39208: no serving certificate available for the kubelet" Feb 
24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.104554 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-tjpbx" event={"ID":"d2c010ad-439f-4ab8-b5e1-377cfd32ac31","Type":"ContainerStarted","Data":"aa3634220747716586fdb2fde4c55ded2bd2a5dc0fb2f2d2a9491a88e1b9eb5b"} Feb 24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.109022 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"baf3a913ad6a5ee9b27b212cd623b3a871273d747eeb60af1ea060002f88fc4d"} Feb 24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.117926 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37458: no serving certificate available for the kubelet" Feb 24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.127518 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kp7bj-config-tjpbx" podStartSLOduration=2.127481813 podStartE2EDuration="2.127481813s" podCreationTimestamp="2026-02-24 09:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:31.127205966 +0000 UTC m=+1053.514968514" watchObservedRunningTime="2026-02-24 09:25:31.127481813 +0000 UTC m=+1053.515244421" Feb 24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.236108 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37466: no serving certificate available for the kubelet" Feb 24 09:25:31 crc kubenswrapper[4822]: I0224 09:25:31.237863 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37482: no serving certificate available for the kubelet" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.007765 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37492: no serving certificate available for the kubelet" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125072 4822 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"e4e82371994adab35db8814a73a668d3257b26c298b36f022909e3cabafbc981"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125137 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"a48d51ccff20b6a21053937d633d21acb96a03a0319dca1bbcac8cae8394c512"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125169 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"3a3ecb4ab822b481cc9cc0a24238483dcb154ed9730ec83acdde8cf1901ab9d3"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125186 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"e95e2b9fb0d4ee8a385aa77af3f39dd44f55a91ec95e2a3f260c1ad698da7381"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125205 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"6f21745262fd63a646c943195fd13f2612ee178680f9e21fd3210ee49f44f67c"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.125223 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e0adc7aa-1b44-4523-8103-6a1714eb2432","Type":"ContainerStarted","Data":"b9d7c689ded31e9fc321c1cc83ed42be99225fbaa63989b6b22a7555e489e6f7"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.135501 4822 generic.go:334] "Generic (PLEG): container finished" podID="d2c010ad-439f-4ab8-b5e1-377cfd32ac31" containerID="aa3634220747716586fdb2fde4c55ded2bd2a5dc0fb2f2d2a9491a88e1b9eb5b" exitCode=0 Feb 24 09:25:32 
crc kubenswrapper[4822]: I0224 09:25:32.135560 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-tjpbx" event={"ID":"d2c010ad-439f-4ab8-b5e1-377cfd32ac31","Type":"ContainerDied","Data":"aa3634220747716586fdb2fde4c55ded2bd2a5dc0fb2f2d2a9491a88e1b9eb5b"} Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.235424 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.020395338 podStartE2EDuration="27.235404171s" podCreationTimestamp="2026-02-24 09:25:05 +0000 UTC" firstStartedPulling="2026-02-24 09:25:23.321627684 +0000 UTC m=+1045.709390232" lastFinishedPulling="2026-02-24 09:25:29.536636517 +0000 UTC m=+1051.924399065" observedRunningTime="2026-02-24 09:25:32.229970922 +0000 UTC m=+1054.617733470" watchObservedRunningTime="2026-02-24 09:25:32.235404171 +0000 UTC m=+1054.623166719" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.536590 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mflb6"] Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.538409 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.546340 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.553222 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mflb6"] Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.612716 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.612758 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8kkj\" (UniqueName: \"kubernetes.io/projected/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-kube-api-access-f8kkj\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.612781 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.612805 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-config\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.613041 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.613079 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714583 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714629 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8kkj\" (UniqueName: \"kubernetes.io/projected/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-kube-api-access-f8kkj\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714647 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714672 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-config\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714706 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.714723 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.715549 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.715579 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-config\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.715706 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.715827 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.716624 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.736642 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8kkj\" (UniqueName: \"kubernetes.io/projected/0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8-kube-api-access-f8kkj\") pod \"dnsmasq-dns-6d5b6d6b67-mflb6\" (UID: \"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.817313 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37498: no serving certificate available for the kubelet" Feb 24 09:25:32 crc kubenswrapper[4822]: I0224 09:25:32.864070 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.335239 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mflb6"] Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.375242 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37512: no serving certificate available for the kubelet" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.533879 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.631853 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.631932 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spdmx\" (UniqueName: \"kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632012 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632073 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: 
\"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632127 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632171 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn\") pod \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\" (UID: \"d2c010ad-439f-4ab8-b5e1-377cfd32ac31\") " Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632215 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632227 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run" (OuterVolumeSpecName: "var-run") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632356 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632697 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632800 4822 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632819 4822 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632828 4822 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-run\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632837 4822 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.632937 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts" (OuterVolumeSpecName: "scripts") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.640091 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx" (OuterVolumeSpecName: "kube-api-access-spdmx") pod "d2c010ad-439f-4ab8-b5e1-377cfd32ac31" (UID: "d2c010ad-439f-4ab8-b5e1-377cfd32ac31"). InnerVolumeSpecName "kube-api-access-spdmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.734010 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spdmx\" (UniqueName: \"kubernetes.io/projected/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-kube-api-access-spdmx\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.734047 4822 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c010ad-439f-4ab8-b5e1-377cfd32ac31-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:33 crc kubenswrapper[4822]: I0224 09:25:33.967449 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kp7bj" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.161872 4822 generic.go:334] "Generic (PLEG): container finished" podID="0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8" containerID="85caa317adb5ef35e3eacb449ea7f9800bb9fe91c23d9b5d1d6b005faaf3b878" exitCode=0 Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.161976 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" event={"ID":"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8","Type":"ContainerDied","Data":"85caa317adb5ef35e3eacb449ea7f9800bb9fe91c23d9b5d1d6b005faaf3b878"} Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.162036 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" 
event={"ID":"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8","Type":"ContainerStarted","Data":"4d797310f4bd4e7a4a69988d2beba5c5dba5359c956941a291e2129390d26cd8"} Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.170616 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-tjpbx" event={"ID":"d2c010ad-439f-4ab8-b5e1-377cfd32ac31","Type":"ContainerDied","Data":"55704afd4c2a288eb88c32d46704580da6bab698c31eb9a3f060ab6a794d70d9"} Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.170654 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-tjpbx" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.170674 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55704afd4c2a288eb88c32d46704580da6bab698c31eb9a3f060ab6a794d70d9" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.239134 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kp7bj-config-tjpbx"] Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.254695 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kp7bj-config-tjpbx"] Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.280959 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kp7bj-config-rprmh"] Feb 24 09:25:34 crc kubenswrapper[4822]: E0224 09:25:34.281322 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c010ad-439f-4ab8-b5e1-377cfd32ac31" containerName="ovn-config" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.281342 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c010ad-439f-4ab8-b5e1-377cfd32ac31" containerName="ovn-config" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.281534 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c010ad-439f-4ab8-b5e1-377cfd32ac31" containerName="ovn-config" Feb 24 09:25:34 crc 
kubenswrapper[4822]: I0224 09:25:34.283380 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.285945 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.290097 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37528: no serving certificate available for the kubelet" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.295578 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-rprmh"] Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.345631 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.345688 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.345728 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.345761 
4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.346083 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8l7f\" (UniqueName: \"kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.346128 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.357204 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c010ad-439f-4ab8-b5e1-377cfd32ac31" path="/var/lib/kubelet/pods/d2c010ad-439f-4ab8-b5e1-377cfd32ac31/volumes" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.447713 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448239 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448273 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448395 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8l7f\" (UniqueName: \"kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448422 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448450 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448688 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448757 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448837 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.448843 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.452678 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.470707 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8l7f\" (UniqueName: 
\"kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f\") pod \"ovn-controller-kp7bj-config-rprmh\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:34 crc kubenswrapper[4822]: I0224 09:25:34.640403 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:35 crc kubenswrapper[4822]: I0224 09:25:35.189822 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" event={"ID":"0d22f973-b3d8-49ee-8ca4-fd6bfbcab7e8","Type":"ContainerStarted","Data":"26a5d2f6494617944fd29d308b1d16fad2ee1b0d9c432e4be72622973d523899"} Feb 24 09:25:35 crc kubenswrapper[4822]: I0224 09:25:35.190379 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:35 crc kubenswrapper[4822]: I0224 09:25:35.219627 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" podStartSLOduration=3.219591464 podStartE2EDuration="3.219591464s" podCreationTimestamp="2026-02-24 09:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:25:35.216968842 +0000 UTC m=+1057.604731390" watchObservedRunningTime="2026-02-24 09:25:35.219591464 +0000 UTC m=+1057.607354012" Feb 24 09:25:35 crc kubenswrapper[4822]: I0224 09:25:35.245057 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-rprmh"] Feb 24 09:25:35 crc kubenswrapper[4822]: W0224 09:25:35.248950 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d0684a_ad9d_4542_aec0_5ed4ff4ad1d3.slice/crio-1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31 WatchSource:0}: Error finding 
container 1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31: Status 404 returned error can't find the container with id 1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31 Feb 24 09:25:35 crc kubenswrapper[4822]: I0224 09:25:35.862705 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37532: no serving certificate available for the kubelet" Feb 24 09:25:36 crc kubenswrapper[4822]: I0224 09:25:36.099730 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37542: no serving certificate available for the kubelet" Feb 24 09:25:36 crc kubenswrapper[4822]: I0224 09:25:36.202870 4822 generic.go:334] "Generic (PLEG): container finished" podID="41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" containerID="c001883de7072e1111fe9960a44fc70ad21f1d36c876265ef963eb7a534ef3de" exitCode=0 Feb 24 09:25:36 crc kubenswrapper[4822]: I0224 09:25:36.204030 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-rprmh" event={"ID":"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3","Type":"ContainerDied","Data":"c001883de7072e1111fe9960a44fc70ad21f1d36c876265ef963eb7a534ef3de"} Feb 24 09:25:36 crc kubenswrapper[4822]: I0224 09:25:36.204109 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-rprmh" event={"ID":"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3","Type":"ContainerStarted","Data":"1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31"} Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.357052 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37554: no serving certificate available for the kubelet" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.658713 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709535 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709609 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709679 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709719 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8l7f\" (UniqueName: \"kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709775 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709784 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run" (OuterVolumeSpecName: "var-run") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709806 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.709941 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.710008 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts\") pod \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\" (UID: \"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3\") " Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.710872 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.711368 4822 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.711401 4822 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.711419 4822 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.711437 4822 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-var-run\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.711990 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts" (OuterVolumeSpecName: "scripts") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.718849 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f" (OuterVolumeSpecName: "kube-api-access-c8l7f") pod "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" (UID: "41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3"). InnerVolumeSpecName "kube-api-access-c8l7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.812746 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8l7f\" (UniqueName: \"kubernetes.io/projected/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-kube-api-access-c8l7f\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:37 crc kubenswrapper[4822]: I0224 09:25:37.813092 4822 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.225322 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-rprmh" event={"ID":"41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3","Type":"ContainerDied","Data":"1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31"} Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.225362 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a5c0db279d651e469fb718d09fd0719696849a2fba40c7d0a94fdbc6986db31" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.225433 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-rprmh" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.764041 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kp7bj-config-rprmh"] Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.775543 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kp7bj-config-rprmh"] Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.825324 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kp7bj-config-xp8sz"] Feb 24 09:25:38 crc kubenswrapper[4822]: E0224 09:25:38.825753 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" containerName="ovn-config" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.825781 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" containerName="ovn-config" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.826085 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" containerName="ovn-config" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.826724 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.832742 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.840146 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-xp8sz"] Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.929365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37564: no serving certificate available for the kubelet" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.934785 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.934834 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.935423 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.935491 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.935517 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:38 crc kubenswrapper[4822]: I0224 09:25:38.935559 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bgw\" (UniqueName: \"kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.037793 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.037941 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.037981 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.038042 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bgw\" (UniqueName: \"kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.038126 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.038169 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.038613 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.039363 4822 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.039859 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.039205 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.042191 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.060927 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bgw\" (UniqueName: \"kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw\") pod \"ovn-controller-kp7bj-config-xp8sz\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.179926 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.468605 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kp7bj-config-xp8sz"] Feb 24 09:25:39 crc kubenswrapper[4822]: I0224 09:25:39.969719 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.098970 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37566: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.185312 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37572: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.201814 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37578: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.246515 4822 generic.go:334] "Generic (PLEG): container finished" podID="a64bb802-2e95-48fe-9acf-af2ae240e5cf" containerID="8c6607ec8e67a57f506c5d65f53bdca202c1248639dda9ad10a3a0d404d68297" exitCode=0 Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.246568 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-xp8sz" event={"ID":"a64bb802-2e95-48fe-9acf-af2ae240e5cf","Type":"ContainerDied","Data":"8c6607ec8e67a57f506c5d65f53bdca202c1248639dda9ad10a3a0d404d68297"} Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.246603 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-xp8sz" event={"ID":"a64bb802-2e95-48fe-9acf-af2ae240e5cf","Type":"ContainerStarted","Data":"57434320b7fee252a67e572e93458e895c058a7c7f4df35beee5e3d9264ffb05"} Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.250857 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37588: no serving certificate 
available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.293332 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37604: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.333144 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37616: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.346928 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3" path="/var/lib/kubelet/pods/41d0684a-ad9d-4542-aec0-5ed4ff4ad1d3/volumes" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.392021 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37618: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.406745 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37632: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.469728 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37638: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.598208 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37648: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.714759 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37658: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4822]: I0224 09:25:40.828844 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37660: no serving certificate available for the kubelet" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.424112 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39038: no serving certificate available for the kubelet" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.550051 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:39046: no serving certificate available for the kubelet" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.659536 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703034 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703185 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703232 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703256 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703276 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9bgw\" (UniqueName: \"kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: 
\"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703308 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn\") pod \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\" (UID: \"a64bb802-2e95-48fe-9acf-af2ae240e5cf\") " Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703699 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.703737 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.704043 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run" (OuterVolumeSpecName: "var-run") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.704620 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.704824 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts" (OuterVolumeSpecName: "scripts") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.709460 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw" (OuterVolumeSpecName: "kube-api-access-j9bgw") pod "a64bb802-2e95-48fe-9acf-af2ae240e5cf" (UID: "a64bb802-2e95-48fe-9acf-af2ae240e5cf"). InnerVolumeSpecName "kube-api-access-j9bgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805380 4822 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805439 4822 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805458 4822 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805474 4822 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a64bb802-2e95-48fe-9acf-af2ae240e5cf-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805498 4822 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a64bb802-2e95-48fe-9acf-af2ae240e5cf-var-run\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.805521 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9bgw\" (UniqueName: \"kubernetes.io/projected/a64bb802-2e95-48fe-9acf-af2ae240e5cf-kube-api-access-j9bgw\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:41 crc kubenswrapper[4822]: I0224 09:25:41.978253 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39050: no serving certificate available for the kubelet" Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.266428 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kp7bj-config-xp8sz" 
event={"ID":"a64bb802-2e95-48fe-9acf-af2ae240e5cf","Type":"ContainerDied","Data":"57434320b7fee252a67e572e93458e895c058a7c7f4df35beee5e3d9264ffb05"} Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.266481 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57434320b7fee252a67e572e93458e895c058a7c7f4df35beee5e3d9264ffb05" Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.266557 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kp7bj-config-xp8sz" Feb 24 09:25:42 crc kubenswrapper[4822]: E0224 09:25:42.387295 4822 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.741266 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kp7bj-config-xp8sz"] Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.753024 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kp7bj-config-xp8sz"] Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.866956 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-mflb6" Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.919719 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39062: no serving certificate available for the kubelet" Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.942087 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"] Feb 24 09:25:42 crc kubenswrapper[4822]: I0224 09:25:42.942498 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="dnsmasq-dns" 
containerID="cri-o://06f06b125bae00f1d68ba91e47bb7a451f291230f68bf1716738ae5bd99a1c5e" gracePeriod=10 Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.290849 4822 generic.go:334] "Generic (PLEG): container finished" podID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerID="06f06b125bae00f1d68ba91e47bb7a451f291230f68bf1716738ae5bd99a1c5e" exitCode=0 Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.291082 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" event={"ID":"00667fc1-3e89-475c-84eb-b89736a8d50e","Type":"ContainerDied","Data":"06f06b125bae00f1d68ba91e47bb7a451f291230f68bf1716738ae5bd99a1c5e"} Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.412308 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.435957 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6nh\" (UniqueName: \"kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh\") pod \"00667fc1-3e89-475c-84eb-b89736a8d50e\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.436051 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config\") pod \"00667fc1-3e89-475c-84eb-b89736a8d50e\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.436084 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb\") pod \"00667fc1-3e89-475c-84eb-b89736a8d50e\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.436122 4822 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb\") pod \"00667fc1-3e89-475c-84eb-b89736a8d50e\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.436299 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc\") pod \"00667fc1-3e89-475c-84eb-b89736a8d50e\" (UID: \"00667fc1-3e89-475c-84eb-b89736a8d50e\") " Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.436347 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39074: no serving certificate available for the kubelet" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.442689 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh" (OuterVolumeSpecName: "kube-api-access-fm6nh") pod "00667fc1-3e89-475c-84eb-b89736a8d50e" (UID: "00667fc1-3e89-475c-84eb-b89736a8d50e"). InnerVolumeSpecName "kube-api-access-fm6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.494601 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00667fc1-3e89-475c-84eb-b89736a8d50e" (UID: "00667fc1-3e89-475c-84eb-b89736a8d50e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.504120 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00667fc1-3e89-475c-84eb-b89736a8d50e" (UID: "00667fc1-3e89-475c-84eb-b89736a8d50e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.507569 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00667fc1-3e89-475c-84eb-b89736a8d50e" (UID: "00667fc1-3e89-475c-84eb-b89736a8d50e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.509571 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config" (OuterVolumeSpecName: "config") pod "00667fc1-3e89-475c-84eb-b89736a8d50e" (UID: "00667fc1-3e89-475c-84eb-b89736a8d50e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.538074 4822 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.538102 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6nh\" (UniqueName: \"kubernetes.io/projected/00667fc1-3e89-475c-84eb-b89736a8d50e-kube-api-access-fm6nh\") on node \"crc\" DevicePath \"\""
Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.538113 4822 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-config\") on node \"crc\" DevicePath \"\""
Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.538121 4822 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 24 09:25:43 crc kubenswrapper[4822]: I0224 09:25:43.538129 4822 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00667fc1-3e89-475c-84eb-b89736a8d50e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.300867 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2" event={"ID":"00667fc1-3e89-475c-84eb-b89736a8d50e","Type":"ContainerDied","Data":"e84db52c7477ee7dde6a73d0586c123b689423afea388bbac75a6a8f1ab30f38"}
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.300965 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mz7t2"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.301213 4822 scope.go:117] "RemoveContainer" containerID="06f06b125bae00f1d68ba91e47bb7a451f291230f68bf1716738ae5bd99a1c5e"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.343280 4822 scope.go:117] "RemoveContainer" containerID="0d5003ade6ad1a1ea7b54351b7d3d7d896d1baa6586ceceae0d85541c2b2bae8"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.360212 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64bb802-2e95-48fe-9acf-af2ae240e5cf" path="/var/lib/kubelet/pods/a64bb802-2e95-48fe-9acf-af2ae240e5cf/volumes"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.361340 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"]
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.361662 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mz7t2"]
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.518720 4822 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.530108 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.554455 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39084: no serving certificate available for the kubelet"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.586769 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39088: no serving certificate available for the kubelet"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.630644 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39092: no serving certificate available for the kubelet"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.693008 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39106: no serving certificate available for the kubelet"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.793329 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39108: no serving certificate available for the kubelet"
Feb 24 09:25:44 crc kubenswrapper[4822]: I0224 09:25:44.901941 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39112: no serving certificate available for the kubelet"
Feb 24 09:25:45 crc kubenswrapper[4822]: I0224 09:25:45.024542 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39122: no serving certificate available for the kubelet"
Feb 24 09:25:45 crc kubenswrapper[4822]: I0224 09:25:45.097278 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39124: no serving certificate available for the kubelet"
Feb 24 09:25:45 crc kubenswrapper[4822]: I0224 09:25:45.446246 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39128: no serving certificate available for the kubelet"
Feb 24 09:25:45 crc kubenswrapper[4822]: I0224 09:25:45.568403 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39142: no serving certificate available for the kubelet"
Feb 24 09:25:46 crc kubenswrapper[4822]: I0224 09:25:46.130480 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39156: no serving certificate available for the kubelet"
Feb 24 09:25:46 crc kubenswrapper[4822]: I0224 09:25:46.362812 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" path="/var/lib/kubelet/pods/00667fc1-3e89-475c-84eb-b89736a8d50e/volumes"
Feb 24 09:25:46 crc kubenswrapper[4822]: I0224 09:25:46.498582 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39158: no serving certificate available for the kubelet"
Feb 24 09:25:47 crc kubenswrapper[4822]: I0224 09:25:47.444297 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39166: no serving certificate available for the kubelet"
Feb 24 09:25:48 crc kubenswrapper[4822]: I0224 09:25:48.078525 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39182: no serving certificate available for the kubelet"
Feb 24 09:25:49 crc kubenswrapper[4822]: I0224 09:25:49.557001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39194: no serving certificate available for the kubelet"
Feb 24 09:25:50 crc kubenswrapper[4822]: I0224 09:25:50.042966 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39208: no serving certificate available for the kubelet"
Feb 24 09:25:50 crc kubenswrapper[4822]: I0224 09:25:50.780128 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39214: no serving certificate available for the kubelet"
Feb 24 09:25:51 crc kubenswrapper[4822]: I0224 09:25:51.135663 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44446: no serving certificate available for the kubelet"
Feb 24 09:25:51 crc kubenswrapper[4822]: I0224 09:25:51.902819 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44456: no serving certificate available for the kubelet"
Feb 24 09:25:52 crc kubenswrapper[4822]: I0224 09:25:52.611539 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44468: no serving certificate available for the kubelet"
Feb 24 09:25:54 crc kubenswrapper[4822]: I0224 09:25:54.175902 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44470: no serving certificate available for the kubelet"
Feb 24 09:25:55 crc kubenswrapper[4822]: I0224 09:25:55.222791 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44482: no serving certificate available for the kubelet"
Feb 24 09:25:55 crc kubenswrapper[4822]: I0224 09:25:55.673357 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44484: no serving certificate available for the kubelet"
Feb 24 09:25:57 crc kubenswrapper[4822]: I0224 09:25:57.240746 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44488: no serving certificate available for the kubelet"
Feb 24 09:25:58 crc kubenswrapper[4822]: I0224 09:25:58.742847 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44500: no serving certificate available for the kubelet"
Feb 24 09:26:00 crc kubenswrapper[4822]: I0224 09:26:00.280413 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44508: no serving certificate available for the kubelet"
Feb 24 09:26:01 crc kubenswrapper[4822]: I0224 09:26:01.126795 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60298: no serving certificate available for the kubelet"
Feb 24 09:26:01 crc kubenswrapper[4822]: I0224 09:26:01.793750 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60308: no serving certificate available for the kubelet"
Feb 24 09:26:03 crc kubenswrapper[4822]: I0224 09:26:03.359238 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60312: no serving certificate available for the kubelet"
Feb 24 09:26:04 crc kubenswrapper[4822]: I0224 09:26:04.840574 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60320: no serving certificate available for the kubelet"
Feb 24 09:26:05 crc kubenswrapper[4822]: I0224 09:26:05.508293 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60328: no serving certificate available for the kubelet"
Feb 24 09:26:06 crc kubenswrapper[4822]: I0224 09:26:06.421386 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60338: no serving certificate available for the kubelet"
Feb 24 09:26:07 crc kubenswrapper[4822]: I0224 09:26:07.896204 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60344: no serving certificate available for the kubelet"
Feb 24 09:26:09 crc kubenswrapper[4822]: I0224 09:26:09.477106 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60346: no serving certificate available for the kubelet"
Feb 24 09:26:10 crc kubenswrapper[4822]: I0224 09:26:10.998628 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60358: no serving certificate available for the kubelet"
Feb 24 09:26:12 crc kubenswrapper[4822]: I0224 09:26:12.510409 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55810: no serving certificate available for the kubelet"
Feb 24 09:26:12 crc kubenswrapper[4822]: I0224 09:26:12.519689 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55818: no serving certificate available for the kubelet"
Feb 24 09:26:14 crc kubenswrapper[4822]: I0224 09:26:14.043698 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55822: no serving certificate available for the kubelet"
Feb 24 09:26:15 crc kubenswrapper[4822]: I0224 09:26:15.572145 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55826: no serving certificate available for the kubelet"
Feb 24 09:26:17 crc kubenswrapper[4822]: I0224 09:26:17.092992 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55842: no serving certificate available for the kubelet"
Feb 24 09:26:18 crc kubenswrapper[4822]: I0224 09:26:18.626806 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55846: no serving certificate available for the kubelet"
Feb 24 09:26:20 crc kubenswrapper[4822]: I0224 09:26:20.150762 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55852: no serving certificate available for the kubelet"
Feb 24 09:26:21 crc kubenswrapper[4822]: I0224 09:26:21.682003 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58308: no serving certificate available for the kubelet"
Feb 24 09:26:21 crc kubenswrapper[4822]: I0224 09:26:21.715760 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58318: no serving certificate available for the kubelet"
Feb 24 09:26:23 crc kubenswrapper[4822]: I0224 09:26:23.207016 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58324: no serving certificate available for the kubelet"
Feb 24 09:26:24 crc kubenswrapper[4822]: I0224 09:26:24.745551 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58338: no serving certificate available for the kubelet"
Feb 24 09:26:26 crc kubenswrapper[4822]: I0224 09:26:26.025848 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58350: no serving certificate available for the kubelet"
Feb 24 09:26:26 crc kubenswrapper[4822]: I0224 09:26:26.264776 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58356: no serving certificate available for the kubelet"
Feb 24 09:26:27 crc kubenswrapper[4822]: I0224 09:26:27.801623 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58358: no serving certificate available for the kubelet"
Feb 24 09:26:29 crc kubenswrapper[4822]: I0224 09:26:29.313162 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58370: no serving certificate available for the kubelet"
Feb 24 09:26:30 crc kubenswrapper[4822]: I0224 09:26:30.852780 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58384: no serving certificate available for the kubelet"
Feb 24 09:26:32 crc kubenswrapper[4822]: I0224 09:26:32.381773 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50030: no serving certificate available for the kubelet"
Feb 24 09:26:33 crc kubenswrapper[4822]: I0224 09:26:33.911567 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50032: no serving certificate available for the kubelet"
Feb 24 09:26:35 crc kubenswrapper[4822]: I0224 09:26:35.433108 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50034: no serving certificate available for the kubelet"
Feb 24 09:26:36 crc kubenswrapper[4822]: I0224 09:26:36.970404 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50038: no serving certificate available for the kubelet"
Feb 24 09:26:38 crc kubenswrapper[4822]: I0224 09:26:38.493016 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50046: no serving certificate available for the kubelet"
Feb 24 09:26:40 crc kubenswrapper[4822]: I0224 09:26:40.031567 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50050: no serving certificate available for the kubelet"
Feb 24 09:26:41 crc kubenswrapper[4822]: I0224 09:26:41.547195 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57498: no serving certificate available for the kubelet"
Feb 24 09:26:43 crc kubenswrapper[4822]: I0224 09:26:43.091260 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57514: no serving certificate available for the kubelet"
Feb 24 09:26:44 crc kubenswrapper[4822]: I0224 09:26:44.603426 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57526: no serving certificate available for the kubelet"
Feb 24 09:26:46 crc kubenswrapper[4822]: I0224 09:26:46.139621 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57528: no serving certificate available for the kubelet"
Feb 24 09:26:47 crc kubenswrapper[4822]: I0224 09:26:47.645766 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57534: no serving certificate available for the kubelet"
Feb 24 09:26:49 crc kubenswrapper[4822]: I0224 09:26:49.184742 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57544: no serving certificate available for the kubelet"
Feb 24 09:26:50 crc kubenswrapper[4822]: I0224 09:26:50.698812 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57558: no serving certificate available for the kubelet"
Feb 24 09:26:52 crc kubenswrapper[4822]: I0224 09:26:52.242715 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54664: no serving certificate available for the kubelet"
Feb 24 09:26:53 crc kubenswrapper[4822]: I0224 09:26:53.699047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54670: no serving certificate available for the kubelet"
Feb 24 09:26:53 crc kubenswrapper[4822]: I0224 09:26:53.757288 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54680: no serving certificate available for the kubelet"
Feb 24 09:26:55 crc kubenswrapper[4822]: I0224 09:26:55.297857 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54694: no serving certificate available for the kubelet"
Feb 24 09:26:56 crc kubenswrapper[4822]: I0224 09:26:56.841054 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54704: no serving certificate available for the kubelet"
Feb 24 09:26:58 crc kubenswrapper[4822]: I0224 09:26:58.366147 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54706: no serving certificate available for the kubelet"
Feb 24 09:26:59 crc kubenswrapper[4822]: I0224 09:26:59.900557 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54720: no serving certificate available for the kubelet"
Feb 24 09:27:01 crc kubenswrapper[4822]: I0224 09:27:01.433641 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40124: no serving certificate available for the kubelet"
Feb 24 09:27:02 crc kubenswrapper[4822]: I0224 09:27:02.822099 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40132: no serving certificate available for the kubelet"
Feb 24 09:27:02 crc kubenswrapper[4822]: I0224 09:27:02.960579 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40144: no serving certificate available for the kubelet"
Feb 24 09:27:04 crc kubenswrapper[4822]: I0224 09:27:04.491708 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40160: no serving certificate available for the kubelet"
Feb 24 09:27:06 crc kubenswrapper[4822]: I0224 09:27:06.012780 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40162: no serving certificate available for the kubelet"
Feb 24 09:27:07 crc kubenswrapper[4822]: I0224 09:27:07.028434 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40170: no serving certificate available for the kubelet"
Feb 24 09:27:07 crc kubenswrapper[4822]: I0224 09:27:07.592281 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40172: no serving certificate available for the kubelet"
Feb 24 09:27:09 crc kubenswrapper[4822]: I0224 09:27:09.071670 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40174: no serving certificate available for the kubelet"
Feb 24 09:27:10 crc kubenswrapper[4822]: I0224 09:27:10.634829 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40186: no serving certificate available for the kubelet"
Feb 24 09:27:12 crc kubenswrapper[4822]: I0224 09:27:12.128158 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34590: no serving certificate available for the kubelet"
Feb 24 09:27:13 crc kubenswrapper[4822]: I0224 09:27:13.697958 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34600: no serving certificate available for the kubelet"
Feb 24 09:27:15 crc kubenswrapper[4822]: I0224 09:27:15.176803 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34612: no serving certificate available for the kubelet"
Feb 24 09:27:15 crc kubenswrapper[4822]: I0224 09:27:15.677103 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:27:15 crc kubenswrapper[4822]: I0224 09:27:15.677564 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:27:16 crc kubenswrapper[4822]: I0224 09:27:16.746627 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34626: no serving certificate available for the kubelet"
Feb 24 09:27:18 crc kubenswrapper[4822]: I0224 09:27:18.234776 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34636: no serving certificate available for the kubelet"
Feb 24 09:27:19 crc kubenswrapper[4822]: I0224 09:27:19.802489 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34640: no serving certificate available for the kubelet"
Feb 24 09:27:21 crc kubenswrapper[4822]: I0224 09:27:21.295100 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39806: no serving certificate available for the kubelet"
Feb 24 09:27:22 crc kubenswrapper[4822]: I0224 09:27:22.855178 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39810: no serving certificate available for the kubelet"
Feb 24 09:27:24 crc kubenswrapper[4822]: I0224 09:27:24.351545 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39824: no serving certificate available for the kubelet"
Feb 24 09:27:25 crc kubenswrapper[4822]: I0224 09:27:25.905741 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39834: no serving certificate available for the kubelet"
Feb 24 09:27:27 crc kubenswrapper[4822]: I0224 09:27:27.408102 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39848: no serving certificate available for the kubelet"
Feb 24 09:27:28 crc kubenswrapper[4822]: I0224 09:27:28.969647 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39860: no serving certificate available for the kubelet"
Feb 24 09:27:30 crc kubenswrapper[4822]: I0224 09:27:30.465414 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39868: no serving certificate available for the kubelet"
Feb 24 09:27:32 crc kubenswrapper[4822]: I0224 09:27:32.027264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59088: no serving certificate available for the kubelet"
Feb 24 09:27:33 crc kubenswrapper[4822]: I0224 09:27:33.525426 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59090: no serving certificate available for the kubelet"
Feb 24 09:27:35 crc kubenswrapper[4822]: I0224 09:27:35.098890 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59104: no serving certificate available for the kubelet"
Feb 24 09:27:36 crc kubenswrapper[4822]: I0224 09:27:36.585592 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59120: no serving certificate available for the kubelet"
Feb 24 09:27:38 crc kubenswrapper[4822]: I0224 09:27:38.161187 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59128: no serving certificate available for the kubelet"
Feb 24 09:27:39 crc kubenswrapper[4822]: I0224 09:27:39.647241 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59132: no serving certificate available for the kubelet"
Feb 24 09:27:41 crc kubenswrapper[4822]: I0224 09:27:41.213441 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42256: no serving certificate available for the kubelet"
Feb 24 09:27:42 crc kubenswrapper[4822]: I0224 09:27:42.705394 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42264: no serving certificate available for the kubelet"
Feb 24 09:27:44 crc kubenswrapper[4822]: I0224 09:27:44.273436 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42272: no serving certificate available for the kubelet"
Feb 24 09:27:45 crc kubenswrapper[4822]: I0224 09:27:45.676494 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:27:45 crc kubenswrapper[4822]: I0224 09:27:45.677027 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:27:45 crc kubenswrapper[4822]: I0224 09:27:45.764363 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42280: no serving certificate available for the kubelet"
Feb 24 09:27:47 crc kubenswrapper[4822]: I0224 09:27:47.324957 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42282: no serving certificate available for the kubelet"
Feb 24 09:27:48 crc kubenswrapper[4822]: I0224 09:27:48.820647 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42292: no serving certificate available for the kubelet"
Feb 24 09:27:50 crc kubenswrapper[4822]: I0224 09:27:50.406593 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42304: no serving certificate available for the kubelet"
Feb 24 09:27:51 crc kubenswrapper[4822]: I0224 09:27:51.885149 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53388: no serving certificate available for the kubelet"
Feb 24 09:27:53 crc kubenswrapper[4822]: I0224 09:27:53.459794 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53396: no serving certificate available for the kubelet"
Feb 24 09:27:54 crc kubenswrapper[4822]: I0224 09:27:54.941053 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53402: no serving certificate available for the kubelet"
Feb 24 09:27:56 crc kubenswrapper[4822]: I0224 09:27:56.520522 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53412: no serving certificate available for the kubelet"
Feb 24 09:27:57 crc kubenswrapper[4822]: I0224 09:27:57.996708 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53414: no serving certificate available for the kubelet"
Feb 24 09:27:59 crc kubenswrapper[4822]: I0224 09:27:59.577524 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53426: no serving certificate available for the kubelet"
Feb 24 09:28:01 crc kubenswrapper[4822]: I0224 09:28:01.046718 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53438: no serving certificate available for the kubelet"
Feb 24 09:28:02 crc kubenswrapper[4822]: I0224 09:28:02.629795 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33856: no serving certificate available for the kubelet"
Feb 24 09:28:04 crc kubenswrapper[4822]: I0224 09:28:04.108533 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33860: no serving certificate available for the kubelet"
Feb 24 09:28:05 crc kubenswrapper[4822]: I0224 09:28:05.688453 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33874: no serving certificate available for the kubelet"
Feb 24 09:28:07 crc kubenswrapper[4822]: I0224 09:28:07.155851 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33880: no serving certificate available for the kubelet"
Feb 24 09:28:08 crc kubenswrapper[4822]: I0224 09:28:08.754048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33890: no serving certificate available for the kubelet"
Feb 24 09:28:10 crc kubenswrapper[4822]: I0224 09:28:10.209097 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33904: no serving certificate available for the kubelet"
Feb 24 09:28:11 crc kubenswrapper[4822]: I0224 09:28:11.822635 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42046: no serving certificate available for the kubelet"
Feb 24 09:28:13 crc kubenswrapper[4822]: I0224 09:28:13.263133 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42054: no serving certificate available for the kubelet"
Feb 24 09:28:14 crc kubenswrapper[4822]: I0224 09:28:14.887043 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42060: no serving certificate available for the kubelet"
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.677094 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.677507 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.677571 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752"
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.678418 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.678524 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3" gracePeriod=600
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.727685 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42070: no serving certificate available for the kubelet"
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.843184 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3" exitCode=0
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.843252 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3"}
Feb 24 09:28:15 crc kubenswrapper[4822]: I0224 09:28:15.843309 4822 scope.go:117] "RemoveContainer" containerID="9ed7a6b939504e2a46d5adbbb7de5c06f8baf234d28be557aa5a9d58954f225c"
Feb 24 09:28:16 crc kubenswrapper[4822]: I0224 09:28:16.314433 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42086: no serving certificate available for the kubelet"
Feb 24 09:28:16 crc kubenswrapper[4822]: I0224 09:28:16.861021 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545"}
Feb 24 09:28:18 crc kubenswrapper[4822]: I0224 09:28:18.029575 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42090: no serving certificate available for the kubelet"
Feb 24 09:28:19 crc kubenswrapper[4822]: I0224 09:28:19.373360 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42100: no serving certificate available for the kubelet"
Feb 24 09:28:21 crc kubenswrapper[4822]: I0224 09:28:21.088671 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45828: no serving certificate available for the kubelet"
Feb 24 09:28:22 crc kubenswrapper[4822]: I0224 09:28:22.430645 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45840: no serving certificate available for the kubelet"
Feb 24 09:28:24 crc kubenswrapper[4822]: I0224 09:28:24.147981 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45844: no serving certificate available for the kubelet"
Feb 24 09:28:24 crc kubenswrapper[4822]: I0224 09:28:24.918879 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45856: no serving certificate available for the kubelet"
Feb 24 09:28:25 crc kubenswrapper[4822]: I0224 09:28:25.488469 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45868: no serving certificate available for the kubelet"
Feb 24 09:28:27 crc kubenswrapper[4822]: I0224 09:28:27.203154 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45872: no serving certificate available for the kubelet"
Feb 24 09:28:28 crc kubenswrapper[4822]: I0224 09:28:28.549241 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45888: no serving certificate available for the kubelet"
Feb 24 09:28:28 crc kubenswrapper[4822]: I0224 09:28:28.997165 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45898: no serving certificate available for the kubelet"
Feb 24 09:28:30 crc kubenswrapper[4822]: I0224 09:28:30.263383 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45914: no serving certificate available for the kubelet"
Feb 24 09:28:31 crc kubenswrapper[4822]: I0224 09:28:31.596683 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60788: no serving certificate available for the kubelet"
Feb 24 09:28:33 crc kubenswrapper[4822]: I0224 09:28:33.321354 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60804: no serving certificate available for the kubelet"
Feb 24 09:28:34 crc kubenswrapper[4822]: I0224 09:28:34.654361 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60806: no serving certificate available for the kubelet"
Feb 24 09:28:36 crc kubenswrapper[4822]: I0224 09:28:36.839782 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60818: no serving certificate available for the kubelet"
Feb 24 09:28:37 crc kubenswrapper[4822]: I0224 09:28:37.712899 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60828: no serving certificate available for the kubelet"
Feb 24 09:28:39 crc kubenswrapper[4822]: I0224 09:28:39.908355 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60838: no serving certificate available for the kubelet"
Feb 24 09:28:40 crc kubenswrapper[4822]: I0224 09:28:40.769159 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60850: no serving certificate available for the kubelet"
Feb 24 09:28:42 crc kubenswrapper[4822]: I0224 09:28:42.962176 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53184: no serving certificate available for the kubelet"
Feb 24 09:28:43 crc kubenswrapper[4822]: I0224 09:28:43.813835 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53196: no serving certificate available for the kubelet"
Feb 24 09:28:46 crc kubenswrapper[4822]: I0224 09:28:46.021432 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53210: no serving certificate available for the kubelet"
Feb 24 09:28:46 crc kubenswrapper[4822]: I0224 09:28:46.904292 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53212: no serving certificate available for the kubelet"
Feb 24 09:28:49 crc kubenswrapper[4822]: I0224 09:28:49.078261 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53224: no serving certificate available for the kubelet"
Feb 24 09:28:49 crc kubenswrapper[4822]: I0224 09:28:49.955805 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53240: no serving certificate available for the kubelet"
Feb 24 09:28:52 crc kubenswrapper[4822]: I0224 09:28:52.140831 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35010: no serving certificate available for the kubelet"
Feb 24 09:28:53 crc kubenswrapper[4822]: I0224 09:28:53.017068 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35024: no serving certificate available for the kubelet"
Feb 24 09:28:55 crc kubenswrapper[4822]: I0224 09:28:55.202815 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35032: no serving certificate available for the kubelet"
Feb 24 09:28:56 crc kubenswrapper[4822]: I0224 09:28:56.090588 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35048: no serving certificate available for the kubelet"
Feb 24 09:28:58 crc kubenswrapper[4822]: I0224 09:28:58.259556 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35058: no serving certificate available for the kubelet"
Feb 24 09:28:59 crc kubenswrapper[4822]: I0224 09:28:59.136813 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35062: no serving certificate available for the kubelet"
Feb 24 09:29:01 crc kubenswrapper[4822]: I0224 09:29:01.307744 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40242: no serving certificate available for the kubelet"
Feb 24 09:29:02 crc kubenswrapper[4822]: I0224 09:29:02.200976 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40246: no serving certificate available for the kubelet"
Feb 24 09:29:04 crc kubenswrapper[4822]: I0224 09:29:04.372499 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40256: no serving certificate available for the kubelet"
Feb 24 09:29:04 crc kubenswrapper[4822]: I0224 09:29:04.685216 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=<
Feb 24 09:29:04 crc kubenswrapper[4822]: waiting for gcomm URI
Feb 24 09:29:04 crc kubenswrapper[4822]: >
Feb 24 09:29:04 crc kubenswrapper[4822]: I0224 09:29:04.685343 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 09:29:04 crc kubenswrapper[4822]: I0224 09:29:04.686171 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 09:29:04 crc kubenswrapper[4822]: I0224 09:29:04.764825 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc" gracePeriod=30
Feb 24 09:29:05 crc kubenswrapper[4822]: I0224 09:29:05.260341 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40270: no serving certificate available for the kubelet"
Feb 24 09:29:05 crc kubenswrapper[4822]: I0224 09:29:05.361461 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc" exitCode=143
Feb 24 09:29:05 crc kubenswrapper[4822]: I0224 09:29:05.361508 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc"}
Feb 24 09:29:05 crc kubenswrapper[4822]: I0224 09:29:05.361538 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4"}
Feb 24 09:29:07 crc kubenswrapper[4822]: I0224 09:29:07.409212 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40278: no serving certificate available for the kubelet"
Feb 24 09:29:08 crc kubenswrapper[4822]: I0224 09:29:08.320765 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40280: no serving certificate available for the kubelet"
Feb 24 09:29:10 crc kubenswrapper[4822]: I0224 09:29:10.466055 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40296: no serving certificate available for the kubelet"
Feb 24 09:29:11 crc kubenswrapper[4822]: I0224 09:29:11.212773 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=<
Feb 24 09:29:11 crc kubenswrapper[4822]: waiting for gcomm URI
Feb 24 09:29:11 crc kubenswrapper[4822]: >
Feb 24 09:29:11 crc kubenswrapper[4822]: I0224 09:29:11.213200 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 09:29:11 crc kubenswrapper[4822]: I0224 09:29:11.214114 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 09:29:11 crc kubenswrapper[4822]: I0224 09:29:11.293588 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea" gracePeriod=30
Feb 24 09:29:11 crc kubenswrapper[4822]: I0224 09:29:11.385894 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35760: no serving certificate available for the kubelet"
Feb 24 09:29:12 crc kubenswrapper[4822]: I0224 09:29:12.428990 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea" exitCode=143
Feb 24 09:29:12 crc kubenswrapper[4822]: I0224 09:29:12.429125 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea"}
Feb 24 09:29:12 crc kubenswrapper[4822]: I0224 09:29:12.429372 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9"}
Feb 24 09:29:13 crc kubenswrapper[4822]: I0224 09:29:13.057491 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 09:29:13 crc kubenswrapper[4822]: I0224 09:29:13.057554 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 24 09:29:13 crc kubenswrapper[4822]: I0224 09:29:13.512195 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35772: no serving certificate available for the kubelet"
Feb 24 09:29:14 crc kubenswrapper[4822]: I0224 09:29:14.444354 4822 ???:1] "http: TLS handshake error from
192.168.126.11:35774: no serving certificate available for the kubelet" Feb 24 09:29:16 crc kubenswrapper[4822]: I0224 09:29:16.602581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35782: no serving certificate available for the kubelet" Feb 24 09:29:17 crc kubenswrapper[4822]: I0224 09:29:17.488947 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35786: no serving certificate available for the kubelet" Feb 24 09:29:19 crc kubenswrapper[4822]: I0224 09:29:19.645226 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35792: no serving certificate available for the kubelet" Feb 24 09:29:20 crc kubenswrapper[4822]: I0224 09:29:20.539304 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35794: no serving certificate available for the kubelet" Feb 24 09:29:21 crc kubenswrapper[4822]: I0224 09:29:21.616832 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:29:21 crc kubenswrapper[4822]: I0224 09:29:21.616896 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:29:22 crc kubenswrapper[4822]: I0224 09:29:22.702001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57256: no serving certificate available for the kubelet" Feb 24 09:29:23 crc kubenswrapper[4822]: I0224 09:29:23.600882 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57262: no serving certificate available for the kubelet" Feb 24 09:29:25 crc kubenswrapper[4822]: I0224 09:29:25.763267 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57270: no serving certificate available for the kubelet" Feb 24 09:29:26 crc kubenswrapper[4822]: I0224 09:29:26.657644 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57274: no serving certificate available for the kubelet" Feb 24 09:29:28 crc kubenswrapper[4822]: I0224 09:29:28.814511 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57290: no serving certificate 
available for the kubelet" Feb 24 09:29:29 crc kubenswrapper[4822]: I0224 09:29:29.721727 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57292: no serving certificate available for the kubelet" Feb 24 09:29:31 crc kubenswrapper[4822]: I0224 09:29:31.882955 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56926: no serving certificate available for the kubelet" Feb 24 09:29:32 crc kubenswrapper[4822]: I0224 09:29:32.771495 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56940: no serving certificate available for the kubelet" Feb 24 09:29:34 crc kubenswrapper[4822]: I0224 09:29:34.936598 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56954: no serving certificate available for the kubelet" Feb 24 09:29:35 crc kubenswrapper[4822]: I0224 09:29:35.824772 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56958: no serving certificate available for the kubelet" Feb 24 09:29:37 crc kubenswrapper[4822]: I0224 09:29:37.984123 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56960: no serving certificate available for the kubelet" Feb 24 09:29:38 crc kubenswrapper[4822]: I0224 09:29:38.880288 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56964: no serving certificate available for the kubelet" Feb 24 09:29:41 crc kubenswrapper[4822]: I0224 09:29:41.042472 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56978: no serving certificate available for the kubelet" Feb 24 09:29:41 crc kubenswrapper[4822]: I0224 09:29:41.938381 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48656: no serving certificate available for the kubelet" Feb 24 09:29:44 crc kubenswrapper[4822]: I0224 09:29:44.086157 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48664: no serving certificate available for the kubelet" Feb 24 09:29:45 crc kubenswrapper[4822]: I0224 09:29:45.002706 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48678: no serving certificate available for the kubelet" Feb 
24 09:29:47 crc kubenswrapper[4822]: I0224 09:29:47.188489 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48684: no serving certificate available for the kubelet" Feb 24 09:29:48 crc kubenswrapper[4822]: I0224 09:29:48.060397 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48700: no serving certificate available for the kubelet" Feb 24 09:29:50 crc kubenswrapper[4822]: I0224 09:29:50.243224 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48712: no serving certificate available for the kubelet" Feb 24 09:29:51 crc kubenswrapper[4822]: I0224 09:29:51.119522 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34498: no serving certificate available for the kubelet" Feb 24 09:29:53 crc kubenswrapper[4822]: I0224 09:29:53.338155 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34506: no serving certificate available for the kubelet" Feb 24 09:29:54 crc kubenswrapper[4822]: I0224 09:29:54.189588 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34520: no serving certificate available for the kubelet" Feb 24 09:29:56 crc kubenswrapper[4822]: I0224 09:29:56.388198 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34522: no serving certificate available for the kubelet" Feb 24 09:29:57 crc kubenswrapper[4822]: I0224 09:29:57.256021 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34532: no serving certificate available for the kubelet" Feb 24 09:29:59 crc kubenswrapper[4822]: I0224 09:29:59.429150 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34542: no serving certificate available for the kubelet" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.166710 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r"] Feb 24 09:30:00 crc kubenswrapper[4822]: E0224 09:30:00.167406 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64bb802-2e95-48fe-9acf-af2ae240e5cf" containerName="ovn-config" 
Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.167527 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64bb802-2e95-48fe-9acf-af2ae240e5cf" containerName="ovn-config" Feb 24 09:30:00 crc kubenswrapper[4822]: E0224 09:30:00.167638 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="dnsmasq-dns" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.167749 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="dnsmasq-dns" Feb 24 09:30:00 crc kubenswrapper[4822]: E0224 09:30:00.167851 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="init" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.168046 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="init" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.168478 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="00667fc1-3e89-475c-84eb-b89736a8d50e" containerName="dnsmasq-dns" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.168623 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64bb802-2e95-48fe-9acf-af2ae240e5cf" containerName="ovn-config" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.169530 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.172668 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.173201 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.187094 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r"] Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.263568 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.263635 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.263836 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwv8c\" (UniqueName: \"kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.300751 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34558: no serving certificate available for the kubelet" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.365355 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwv8c\" (UniqueName: \"kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.365524 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.365576 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.366879 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 
09:30:00.375520 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.385823 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwv8c\" (UniqueName: \"kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c\") pod \"collect-profiles-29532090-gw94r\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:00 crc kubenswrapper[4822]: I0224 09:30:00.516598 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:01 crc kubenswrapper[4822]: I0224 09:30:01.016530 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r"] Feb 24 09:30:01 crc kubenswrapper[4822]: I0224 09:30:01.917700 4822 generic.go:334] "Generic (PLEG): container finished" podID="5d4e1118-756b-4a31-9e15-6693440650f5" containerID="8f3dd8bc68131581571b77a2753db53c08b6c5c788994adb86ebee581dbb6997" exitCode=0 Feb 24 09:30:01 crc kubenswrapper[4822]: I0224 09:30:01.917836 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" event={"ID":"5d4e1118-756b-4a31-9e15-6693440650f5","Type":"ContainerDied","Data":"8f3dd8bc68131581571b77a2753db53c08b6c5c788994adb86ebee581dbb6997"} Feb 24 09:30:01 crc kubenswrapper[4822]: I0224 09:30:01.918195 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" 
event={"ID":"5d4e1118-756b-4a31-9e15-6693440650f5","Type":"ContainerStarted","Data":"79153331375e1ac704cf75f4816fea41cf1fae0337e6ff30776066fc10b29ea6"} Feb 24 09:30:02 crc kubenswrapper[4822]: I0224 09:30:02.488373 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33188: no serving certificate available for the kubelet" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.348546 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.353331 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33196: no serving certificate available for the kubelet" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.434443 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwv8c\" (UniqueName: \"kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c\") pod \"5d4e1118-756b-4a31-9e15-6693440650f5\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.434520 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume\") pod \"5d4e1118-756b-4a31-9e15-6693440650f5\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.434664 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume\") pod \"5d4e1118-756b-4a31-9e15-6693440650f5\" (UID: \"5d4e1118-756b-4a31-9e15-6693440650f5\") " Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.435439 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d4e1118-756b-4a31-9e15-6693440650f5" (UID: "5d4e1118-756b-4a31-9e15-6693440650f5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.442926 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d4e1118-756b-4a31-9e15-6693440650f5" (UID: "5d4e1118-756b-4a31-9e15-6693440650f5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.443114 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c" (OuterVolumeSpecName: "kube-api-access-cwv8c") pod "5d4e1118-756b-4a31-9e15-6693440650f5" (UID: "5d4e1118-756b-4a31-9e15-6693440650f5"). InnerVolumeSpecName "kube-api-access-cwv8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.536852 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwv8c\" (UniqueName: \"kubernetes.io/projected/5d4e1118-756b-4a31-9e15-6693440650f5-kube-api-access-cwv8c\") on node \"crc\" DevicePath \"\"" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.536888 4822 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d4e1118-756b-4a31-9e15-6693440650f5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.536901 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d4e1118-756b-4a31-9e15-6693440650f5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.963329 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" event={"ID":"5d4e1118-756b-4a31-9e15-6693440650f5","Type":"ContainerDied","Data":"79153331375e1ac704cf75f4816fea41cf1fae0337e6ff30776066fc10b29ea6"} Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.963686 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79153331375e1ac704cf75f4816fea41cf1fae0337e6ff30776066fc10b29ea6" Feb 24 09:30:03 crc kubenswrapper[4822]: I0224 09:30:03.963404 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532090-gw94r" Feb 24 09:30:05 crc kubenswrapper[4822]: I0224 09:30:05.546895 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33204: no serving certificate available for the kubelet" Feb 24 09:30:06 crc kubenswrapper[4822]: I0224 09:30:06.434983 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33212: no serving certificate available for the kubelet" Feb 24 09:30:08 crc kubenswrapper[4822]: I0224 09:30:08.611180 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33226: no serving certificate available for the kubelet" Feb 24 09:30:09 crc kubenswrapper[4822]: I0224 09:30:09.489652 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33234: no serving certificate available for the kubelet" Feb 24 09:30:11 crc kubenswrapper[4822]: I0224 09:30:11.668355 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57136: no serving certificate available for the kubelet" Feb 24 09:30:12 crc kubenswrapper[4822]: I0224 09:30:12.543768 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57138: no serving certificate available for the kubelet" Feb 24 09:30:14 crc kubenswrapper[4822]: I0224 09:30:14.722975 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57148: no serving certificate available for the kubelet" Feb 24 09:30:14 crc kubenswrapper[4822]: I0224 09:30:14.799227 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57158: no serving certificate available for the kubelet" Feb 24 09:30:15 crc kubenswrapper[4822]: I0224 09:30:15.628342 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57174: no serving certificate available for the kubelet" Feb 24 09:30:17 crc kubenswrapper[4822]: I0224 09:30:17.762866 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57190: no serving certificate available for the kubelet" Feb 24 09:30:18 crc kubenswrapper[4822]: I0224 09:30:18.685205 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:57204: no serving certificate available for the kubelet" Feb 24 09:30:20 crc kubenswrapper[4822]: I0224 09:30:20.823087 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57208: no serving certificate available for the kubelet" Feb 24 09:30:21 crc kubenswrapper[4822]: I0224 09:30:21.737318 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44514: no serving certificate available for the kubelet" Feb 24 09:30:23 crc kubenswrapper[4822]: I0224 09:30:23.885312 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44522: no serving certificate available for the kubelet" Feb 24 09:30:24 crc kubenswrapper[4822]: I0224 09:30:24.797431 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44534: no serving certificate available for the kubelet" Feb 24 09:30:26 crc kubenswrapper[4822]: I0224 09:30:26.936277 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44544: no serving certificate available for the kubelet" Feb 24 09:30:27 crc kubenswrapper[4822]: I0224 09:30:27.901180 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44554: no serving certificate available for the kubelet" Feb 24 09:30:30 crc kubenswrapper[4822]: I0224 09:30:30.003443 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44564: no serving certificate available for the kubelet" Feb 24 09:30:30 crc kubenswrapper[4822]: I0224 09:30:30.957864 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44568: no serving certificate available for the kubelet" Feb 24 09:30:33 crc kubenswrapper[4822]: I0224 09:30:33.059193 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46524: no serving certificate available for the kubelet" Feb 24 09:30:34 crc kubenswrapper[4822]: I0224 09:30:34.017679 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46540: no serving certificate available for the kubelet" Feb 24 09:30:36 crc kubenswrapper[4822]: I0224 09:30:36.121390 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46544: no 
serving certificate available for the kubelet" Feb 24 09:30:37 crc kubenswrapper[4822]: I0224 09:30:37.075177 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46560: no serving certificate available for the kubelet" Feb 24 09:30:39 crc kubenswrapper[4822]: I0224 09:30:39.179401 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46574: no serving certificate available for the kubelet" Feb 24 09:30:40 crc kubenswrapper[4822]: I0224 09:30:40.129189 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46576: no serving certificate available for the kubelet" Feb 24 09:30:42 crc kubenswrapper[4822]: I0224 09:30:42.299026 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43154: no serving certificate available for the kubelet" Feb 24 09:30:43 crc kubenswrapper[4822]: I0224 09:30:43.186268 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43166: no serving certificate available for the kubelet" Feb 24 09:30:45 crc kubenswrapper[4822]: I0224 09:30:45.348146 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43172: no serving certificate available for the kubelet" Feb 24 09:30:45 crc kubenswrapper[4822]: I0224 09:30:45.676747 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:30:45 crc kubenswrapper[4822]: I0224 09:30:45.676826 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:30:46 crc kubenswrapper[4822]: I0224 09:30:46.244855 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43174: no 
serving certificate available for the kubelet" Feb 24 09:30:48 crc kubenswrapper[4822]: I0224 09:30:48.409484 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43176: no serving certificate available for the kubelet" Feb 24 09:30:49 crc kubenswrapper[4822]: I0224 09:30:49.292472 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43188: no serving certificate available for the kubelet" Feb 24 09:30:51 crc kubenswrapper[4822]: I0224 09:30:51.480764 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43016: no serving certificate available for the kubelet" Feb 24 09:30:52 crc kubenswrapper[4822]: I0224 09:30:52.346457 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43022: no serving certificate available for the kubelet" Feb 24 09:30:54 crc kubenswrapper[4822]: I0224 09:30:54.537746 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43036: no serving certificate available for the kubelet" Feb 24 09:30:55 crc kubenswrapper[4822]: I0224 09:30:55.404431 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43044: no serving certificate available for the kubelet" Feb 24 09:30:57 crc kubenswrapper[4822]: I0224 09:30:57.621387 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43046: no serving certificate available for the kubelet" Feb 24 09:30:58 crc kubenswrapper[4822]: I0224 09:30:58.463071 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43052: no serving certificate available for the kubelet" Feb 24 09:30:59 crc kubenswrapper[4822]: I0224 09:30:59.669338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43058: no serving certificate available for the kubelet" Feb 24 09:31:00 crc kubenswrapper[4822]: I0224 09:31:00.668976 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43062: no serving certificate available for the kubelet" Feb 24 09:31:01 crc kubenswrapper[4822]: I0224 09:31:01.530043 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50168: no serving certificate available 
for the kubelet" Feb 24 09:31:03 crc kubenswrapper[4822]: I0224 09:31:03.740578 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50172: no serving certificate available for the kubelet" Feb 24 09:31:04 crc kubenswrapper[4822]: I0224 09:31:04.589020 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50174: no serving certificate available for the kubelet" Feb 24 09:31:06 crc kubenswrapper[4822]: I0224 09:31:06.794956 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50180: no serving certificate available for the kubelet" Feb 24 09:31:07 crc kubenswrapper[4822]: I0224 09:31:07.646570 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50192: no serving certificate available for the kubelet" Feb 24 09:31:08 crc kubenswrapper[4822]: I0224 09:31:08.915851 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50198: no serving certificate available for the kubelet" Feb 24 09:31:09 crc kubenswrapper[4822]: I0224 09:31:09.838074 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50204: no serving certificate available for the kubelet" Feb 24 09:31:10 crc kubenswrapper[4822]: I0224 09:31:10.728347 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50218: no serving certificate available for the kubelet" Feb 24 09:31:12 crc kubenswrapper[4822]: I0224 09:31:12.890625 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39302: no serving certificate available for the kubelet" Feb 24 09:31:12 crc kubenswrapper[4822]: I0224 09:31:12.899145 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39304: no serving certificate available for the kubelet" Feb 24 09:31:13 crc kubenswrapper[4822]: I0224 09:31:13.781255 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39316: no serving certificate available for the kubelet" Feb 24 09:31:15 crc kubenswrapper[4822]: I0224 09:31:15.676979 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:31:15 crc kubenswrapper[4822]: I0224 09:31:15.677326 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:31:15 crc kubenswrapper[4822]: I0224 09:31:15.960718 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39322: no serving certificate available for the kubelet" Feb 24 09:31:16 crc kubenswrapper[4822]: I0224 09:31:16.837389 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39324: no serving certificate available for the kubelet" Feb 24 09:31:19 crc kubenswrapper[4822]: I0224 09:31:19.015015 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39334: no serving certificate available for the kubelet" Feb 24 09:31:19 crc kubenswrapper[4822]: I0224 09:31:19.893639 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39348: no serving certificate available for the kubelet" Feb 24 09:31:22 crc kubenswrapper[4822]: I0224 09:31:22.069100 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58708: no serving certificate available for the kubelet" Feb 24 09:31:22 crc kubenswrapper[4822]: I0224 09:31:22.947562 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58712: no serving certificate available for the kubelet" Feb 24 09:31:25 crc kubenswrapper[4822]: I0224 09:31:25.212061 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58720: no serving certificate available for the kubelet" Feb 24 09:31:26 crc kubenswrapper[4822]: I0224 09:31:26.004223 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58736: no serving certificate available 
for the kubelet" Feb 24 09:31:28 crc kubenswrapper[4822]: I0224 09:31:28.278515 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58746: no serving certificate available for the kubelet" Feb 24 09:31:29 crc kubenswrapper[4822]: I0224 09:31:29.057589 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58754: no serving certificate available for the kubelet" Feb 24 09:31:31 crc kubenswrapper[4822]: I0224 09:31:31.358185 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45756: no serving certificate available for the kubelet" Feb 24 09:31:32 crc kubenswrapper[4822]: I0224 09:31:32.127682 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45768: no serving certificate available for the kubelet" Feb 24 09:31:34 crc kubenswrapper[4822]: I0224 09:31:34.407186 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45772: no serving certificate available for the kubelet" Feb 24 09:31:35 crc kubenswrapper[4822]: I0224 09:31:35.180453 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45774: no serving certificate available for the kubelet" Feb 24 09:31:37 crc kubenswrapper[4822]: I0224 09:31:37.463007 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45782: no serving certificate available for the kubelet" Feb 24 09:31:38 crc kubenswrapper[4822]: I0224 09:31:38.238908 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45784: no serving certificate available for the kubelet" Feb 24 09:31:40 crc kubenswrapper[4822]: I0224 09:31:40.528119 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45788: no serving certificate available for the kubelet" Feb 24 09:31:41 crc kubenswrapper[4822]: I0224 09:31:41.274362 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51712: no serving certificate available for the kubelet" Feb 24 09:31:43 crc kubenswrapper[4822]: I0224 09:31:43.582800 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51714: no serving certificate available for the kubelet" Feb 24 
09:31:44 crc kubenswrapper[4822]: I0224 09:31:44.332293 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51722: no serving certificate available for the kubelet" Feb 24 09:31:45 crc kubenswrapper[4822]: I0224 09:31:45.676779 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:31:45 crc kubenswrapper[4822]: I0224 09:31:45.676866 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:31:45 crc kubenswrapper[4822]: I0224 09:31:45.676955 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:31:45 crc kubenswrapper[4822]: I0224 09:31:45.678020 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:31:45 crc kubenswrapper[4822]: I0224 09:31:45.678564 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545" gracePeriod=600 Feb 24 09:31:46 crc kubenswrapper[4822]: I0224 
09:31:46.034799 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545" exitCode=0 Feb 24 09:31:46 crc kubenswrapper[4822]: I0224 09:31:46.034864 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545"} Feb 24 09:31:46 crc kubenswrapper[4822]: I0224 09:31:46.034954 4822 scope.go:117] "RemoveContainer" containerID="54c4238ba3e15211ebc3dfc64c33fca8f6ffee22714455185e3ed60742e4b1d3" Feb 24 09:31:46 crc kubenswrapper[4822]: I0224 09:31:46.636775 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51730: no serving certificate available for the kubelet" Feb 24 09:31:47 crc kubenswrapper[4822]: I0224 09:31:47.050976 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180"} Feb 24 09:31:47 crc kubenswrapper[4822]: I0224 09:31:47.375483 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51742: no serving certificate available for the kubelet" Feb 24 09:31:49 crc kubenswrapper[4822]: I0224 09:31:49.693272 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51758: no serving certificate available for the kubelet" Feb 24 09:31:50 crc kubenswrapper[4822]: I0224 09:31:50.426406 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51762: no serving certificate available for the kubelet" Feb 24 09:31:52 crc kubenswrapper[4822]: I0224 09:31:52.754134 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51262: no serving certificate available for the kubelet" Feb 24 09:31:53 crc kubenswrapper[4822]: 
I0224 09:31:53.479787 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51274: no serving certificate available for the kubelet" Feb 24 09:31:55 crc kubenswrapper[4822]: I0224 09:31:55.815804 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51284: no serving certificate available for the kubelet" Feb 24 09:31:56 crc kubenswrapper[4822]: I0224 09:31:56.535150 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51300: no serving certificate available for the kubelet" Feb 24 09:31:58 crc kubenswrapper[4822]: I0224 09:31:58.870329 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51310: no serving certificate available for the kubelet" Feb 24 09:31:59 crc kubenswrapper[4822]: I0224 09:31:59.597357 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51316: no serving certificate available for the kubelet" Feb 24 09:32:01 crc kubenswrapper[4822]: I0224 09:32:01.916012 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49084: no serving certificate available for the kubelet" Feb 24 09:32:02 crc kubenswrapper[4822]: I0224 09:32:02.648399 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49092: no serving certificate available for the kubelet" Feb 24 09:32:02 crc kubenswrapper[4822]: I0224 09:32:02.901718 4822 scope.go:117] "RemoveContainer" containerID="8c6607ec8e67a57f506c5d65f53bdca202c1248639dda9ad10a3a0d404d68297" Feb 24 09:32:02 crc kubenswrapper[4822]: I0224 09:32:02.951294 4822 scope.go:117] "RemoveContainer" containerID="aa3634220747716586fdb2fde4c55ded2bd2a5dc0fb2f2d2a9491a88e1b9eb5b" Feb 24 09:32:03 crc kubenswrapper[4822]: I0224 09:32:03.003119 4822 scope.go:117] "RemoveContainer" containerID="c001883de7072e1111fe9960a44fc70ad21f1d36c876265ef963eb7a534ef3de" Feb 24 09:32:04 crc kubenswrapper[4822]: I0224 09:32:04.975420 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49108: no serving certificate available for the kubelet" Feb 24 09:32:05 crc kubenswrapper[4822]: I0224 09:32:05.708509 4822 
???:1] "http: TLS handshake error from 192.168.126.11:49122: no serving certificate available for the kubelet" Feb 24 09:32:08 crc kubenswrapper[4822]: I0224 09:32:08.034705 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49130: no serving certificate available for the kubelet" Feb 24 09:32:08 crc kubenswrapper[4822]: I0224 09:32:08.818544 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49136: no serving certificate available for the kubelet" Feb 24 09:32:11 crc kubenswrapper[4822]: I0224 09:32:11.091109 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50232: no serving certificate available for the kubelet" Feb 24 09:32:11 crc kubenswrapper[4822]: I0224 09:32:11.864334 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50236: no serving certificate available for the kubelet" Feb 24 09:32:14 crc kubenswrapper[4822]: I0224 09:32:14.143261 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50248: no serving certificate available for the kubelet" Feb 24 09:32:14 crc kubenswrapper[4822]: I0224 09:32:14.923810 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50252: no serving certificate available for the kubelet" Feb 24 09:32:17 crc kubenswrapper[4822]: I0224 09:32:17.220855 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50268: no serving certificate available for the kubelet" Feb 24 09:32:17 crc kubenswrapper[4822]: I0224 09:32:17.989133 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50284: no serving certificate available for the kubelet" Feb 24 09:32:20 crc kubenswrapper[4822]: I0224 09:32:20.280010 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50298: no serving certificate available for the kubelet" Feb 24 09:32:21 crc kubenswrapper[4822]: I0224 09:32:21.048768 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50302: no serving certificate available for the kubelet" Feb 24 09:32:23 crc kubenswrapper[4822]: I0224 09:32:23.416356 4822 ???:1] "http: TLS handshake 
error from 192.168.126.11:56786: no serving certificate available for the kubelet" Feb 24 09:32:24 crc kubenswrapper[4822]: I0224 09:32:24.104983 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56802: no serving certificate available for the kubelet" Feb 24 09:32:26 crc kubenswrapper[4822]: I0224 09:32:26.475390 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56804: no serving certificate available for the kubelet" Feb 24 09:32:27 crc kubenswrapper[4822]: I0224 09:32:27.168369 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56814: no serving certificate available for the kubelet" Feb 24 09:32:29 crc kubenswrapper[4822]: I0224 09:32:29.519015 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56828: no serving certificate available for the kubelet" Feb 24 09:32:30 crc kubenswrapper[4822]: I0224 09:32:30.216608 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56840: no serving certificate available for the kubelet" Feb 24 09:32:32 crc kubenswrapper[4822]: I0224 09:32:32.578002 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42712: no serving certificate available for the kubelet" Feb 24 09:32:33 crc kubenswrapper[4822]: I0224 09:32:33.277047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42714: no serving certificate available for the kubelet" Feb 24 09:32:35 crc kubenswrapper[4822]: I0224 09:32:35.634467 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42718: no serving certificate available for the kubelet" Feb 24 09:32:36 crc kubenswrapper[4822]: I0224 09:32:36.336024 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42722: no serving certificate available for the kubelet" Feb 24 09:32:38 crc kubenswrapper[4822]: I0224 09:32:38.692518 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42728: no serving certificate available for the kubelet" Feb 24 09:32:39 crc kubenswrapper[4822]: I0224 09:32:39.384758 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:42734: no serving certificate available for the kubelet" Feb 24 09:32:41 crc kubenswrapper[4822]: I0224 09:32:41.748059 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35480: no serving certificate available for the kubelet" Feb 24 09:32:42 crc kubenswrapper[4822]: I0224 09:32:42.439806 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35492: no serving certificate available for the kubelet" Feb 24 09:32:44 crc kubenswrapper[4822]: I0224 09:32:44.807256 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35506: no serving certificate available for the kubelet" Feb 24 09:32:45 crc kubenswrapper[4822]: I0224 09:32:45.491906 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35518: no serving certificate available for the kubelet" Feb 24 09:32:47 crc kubenswrapper[4822]: I0224 09:32:47.856122 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35532: no serving certificate available for the kubelet" Feb 24 09:32:48 crc kubenswrapper[4822]: I0224 09:32:48.549378 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35534: no serving certificate available for the kubelet" Feb 24 09:32:50 crc kubenswrapper[4822]: I0224 09:32:50.925080 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35542: no serving certificate available for the kubelet" Feb 24 09:32:51 crc kubenswrapper[4822]: I0224 09:32:51.607944 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49974: no serving certificate available for the kubelet" Feb 24 09:32:54 crc kubenswrapper[4822]: I0224 09:32:54.013748 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49988: no serving certificate available for the kubelet" Feb 24 09:32:54 crc kubenswrapper[4822]: I0224 09:32:54.665252 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49990: no serving certificate available for the kubelet" Feb 24 09:32:57 crc kubenswrapper[4822]: I0224 09:32:57.063519 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50000: no 
serving certificate available for the kubelet" Feb 24 09:32:57 crc kubenswrapper[4822]: I0224 09:32:57.730894 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50002: no serving certificate available for the kubelet" Feb 24 09:33:00 crc kubenswrapper[4822]: I0224 09:33:00.128613 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50016: no serving certificate available for the kubelet" Feb 24 09:33:00 crc kubenswrapper[4822]: I0224 09:33:00.793051 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50020: no serving certificate available for the kubelet" Feb 24 09:33:03 crc kubenswrapper[4822]: I0224 09:33:03.246092 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35112: no serving certificate available for the kubelet" Feb 24 09:33:03 crc kubenswrapper[4822]: I0224 09:33:03.899801 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35116: no serving certificate available for the kubelet" Feb 24 09:33:06 crc kubenswrapper[4822]: I0224 09:33:06.311605 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35132: no serving certificate available for the kubelet" Feb 24 09:33:06 crc kubenswrapper[4822]: I0224 09:33:06.960686 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35138: no serving certificate available for the kubelet" Feb 24 09:33:09 crc kubenswrapper[4822]: I0224 09:33:09.383880 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35148: no serving certificate available for the kubelet" Feb 24 09:33:10 crc kubenswrapper[4822]: I0224 09:33:10.020687 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35158: no serving certificate available for the kubelet" Feb 24 09:33:12 crc kubenswrapper[4822]: I0224 09:33:12.439024 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54886: no serving certificate available for the kubelet" Feb 24 09:33:13 crc kubenswrapper[4822]: I0224 09:33:13.079048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54890: no serving certificate available 
for the kubelet" Feb 24 09:33:14 crc kubenswrapper[4822]: I0224 09:33:14.905276 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:33:14 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:33:14 crc kubenswrapper[4822]: > Feb 24 09:33:14 crc kubenswrapper[4822]: I0224 09:33:14.905717 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:33:14 crc kubenswrapper[4822]: I0224 09:33:14.906869 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:33:14 crc kubenswrapper[4822]: I0224 09:33:14.992729 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4" gracePeriod=30 Feb 24 09:33:15 crc kubenswrapper[4822]: E0224 09:33:15.112781 4822 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ff049ae_9abb_4477_9f51_eee7228cedfd.slice/crio-7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4.scope\": RecentStats: unable to find data in memory cache]" Feb 24 09:33:15 crc kubenswrapper[4822]: I0224 09:33:15.479719 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54904: no serving certificate available for the kubelet" Feb 24 09:33:15 crc kubenswrapper[4822]: I0224 09:33:15.953472 4822 generic.go:334] "Generic (PLEG): container 
finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4" exitCode=143 Feb 24 09:33:15 crc kubenswrapper[4822]: I0224 09:33:15.953537 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4"} Feb 24 09:33:15 crc kubenswrapper[4822]: I0224 09:33:15.953585 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd"} Feb 24 09:33:15 crc kubenswrapper[4822]: I0224 09:33:15.953629 4822 scope.go:117] "RemoveContainer" containerID="33c66df3d646804c90bd8a401cc8cc3214693c58479f3e9432107f6727c5c8bc" Feb 24 09:33:16 crc kubenswrapper[4822]: I0224 09:33:16.118777 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54908: no serving certificate available for the kubelet" Feb 24 09:33:18 crc kubenswrapper[4822]: I0224 09:33:18.624001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54922: no serving certificate available for the kubelet" Feb 24 09:33:19 crc kubenswrapper[4822]: I0224 09:33:19.175612 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54928: no serving certificate available for the kubelet" Feb 24 09:33:21 crc kubenswrapper[4822]: I0224 09:33:21.218168 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:33:21 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:33:21 crc kubenswrapper[4822]: > Feb 24 09:33:21 crc kubenswrapper[4822]: I0224 09:33:21.218662 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-galera-0" Feb 24 09:33:21 crc kubenswrapper[4822]: I0224 09:33:21.219817 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:33:21 crc kubenswrapper[4822]: I0224 09:33:21.304304 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9" gracePeriod=30 Feb 24 09:33:21 crc kubenswrapper[4822]: I0224 09:33:21.685049 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49464: no serving certificate available for the kubelet" Feb 24 09:33:22 crc kubenswrapper[4822]: I0224 09:33:22.023484 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9" exitCode=143 Feb 24 09:33:22 crc kubenswrapper[4822]: I0224 09:33:22.023549 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9"} Feb 24 09:33:22 crc kubenswrapper[4822]: I0224 09:33:22.023591 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059"} Feb 24 09:33:22 crc kubenswrapper[4822]: I0224 09:33:22.023621 4822 scope.go:117] "RemoveContainer" containerID="85b53d5a5f9684bb70bb78e089b925297df077dac3839b5245ea55f59723a5ea" Feb 24 09:33:22 crc 
kubenswrapper[4822]: I0224 09:33:22.235026 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49480: no serving certificate available for the kubelet" Feb 24 09:33:23 crc kubenswrapper[4822]: I0224 09:33:23.058209 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:33:23 crc kubenswrapper[4822]: I0224 09:33:23.058277 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:33:24 crc kubenswrapper[4822]: I0224 09:33:24.734791 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49490: no serving certificate available for the kubelet" Feb 24 09:33:25 crc kubenswrapper[4822]: I0224 09:33:25.291355 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49496: no serving certificate available for the kubelet" Feb 24 09:33:27 crc kubenswrapper[4822]: I0224 09:33:27.795254 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49498: no serving certificate available for the kubelet" Feb 24 09:33:28 crc kubenswrapper[4822]: I0224 09:33:28.360908 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49500: no serving certificate available for the kubelet" Feb 24 09:33:30 crc kubenswrapper[4822]: I0224 09:33:30.839243 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49504: no serving certificate available for the kubelet" Feb 24 09:33:31 crc kubenswrapper[4822]: I0224 09:33:31.408102 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53276: no serving certificate available for the kubelet" Feb 24 09:33:31 crc kubenswrapper[4822]: I0224 09:33:31.616613 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:33:31 crc kubenswrapper[4822]: I0224 09:33:31.616714 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:33:33 crc kubenswrapper[4822]: I0224 09:33:33.892176 4822 
???:1] "http: TLS handshake error from 192.168.126.11:53292: no serving certificate available for the kubelet" Feb 24 09:33:34 crc kubenswrapper[4822]: I0224 09:33:34.468580 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53306: no serving certificate available for the kubelet" Feb 24 09:33:36 crc kubenswrapper[4822]: I0224 09:33:36.939066 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53314: no serving certificate available for the kubelet" Feb 24 09:33:37 crc kubenswrapper[4822]: I0224 09:33:37.692846 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53330: no serving certificate available for the kubelet" Feb 24 09:33:39 crc kubenswrapper[4822]: I0224 09:33:39.999198 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53336: no serving certificate available for the kubelet" Feb 24 09:33:40 crc kubenswrapper[4822]: I0224 09:33:40.745254 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53346: no serving certificate available for the kubelet" Feb 24 09:33:43 crc kubenswrapper[4822]: I0224 09:33:43.055453 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52756: no serving certificate available for the kubelet" Feb 24 09:33:43 crc kubenswrapper[4822]: I0224 09:33:43.801300 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52770: no serving certificate available for the kubelet" Feb 24 09:33:46 crc kubenswrapper[4822]: I0224 09:33:46.125145 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52772: no serving certificate available for the kubelet" Feb 24 09:33:46 crc kubenswrapper[4822]: I0224 09:33:46.845880 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52774: no serving certificate available for the kubelet" Feb 24 09:33:49 crc kubenswrapper[4822]: I0224 09:33:49.178388 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52776: no serving certificate available for the kubelet" Feb 24 09:33:49 crc kubenswrapper[4822]: I0224 09:33:49.899049 4822 ???:1] "http: TLS handshake 
error from 192.168.126.11:52790: no serving certificate available for the kubelet" Feb 24 09:33:52 crc kubenswrapper[4822]: I0224 09:33:52.235445 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41154: no serving certificate available for the kubelet" Feb 24 09:33:52 crc kubenswrapper[4822]: I0224 09:33:52.951710 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41166: no serving certificate available for the kubelet" Feb 24 09:33:55 crc kubenswrapper[4822]: I0224 09:33:55.307042 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41174: no serving certificate available for the kubelet" Feb 24 09:33:56 crc kubenswrapper[4822]: I0224 09:33:56.015045 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41190: no serving certificate available for the kubelet" Feb 24 09:33:58 crc kubenswrapper[4822]: I0224 09:33:58.364202 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41204: no serving certificate available for the kubelet" Feb 24 09:33:59 crc kubenswrapper[4822]: I0224 09:33:59.076901 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41218: no serving certificate available for the kubelet" Feb 24 09:34:01 crc kubenswrapper[4822]: I0224 09:34:01.407858 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33984: no serving certificate available for the kubelet" Feb 24 09:34:02 crc kubenswrapper[4822]: I0224 09:34:02.128870 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33988: no serving certificate available for the kubelet" Feb 24 09:34:04 crc kubenswrapper[4822]: I0224 09:34:04.460618 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33992: no serving certificate available for the kubelet" Feb 24 09:34:05 crc kubenswrapper[4822]: I0224 09:34:05.189426 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34006: no serving certificate available for the kubelet" Feb 24 09:34:07 crc kubenswrapper[4822]: I0224 09:34:07.517646 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:34020: no serving certificate available for the kubelet" Feb 24 09:34:08 crc kubenswrapper[4822]: I0224 09:34:08.230932 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34022: no serving certificate available for the kubelet" Feb 24 09:34:10 crc kubenswrapper[4822]: I0224 09:34:10.574388 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34030: no serving certificate available for the kubelet" Feb 24 09:34:11 crc kubenswrapper[4822]: I0224 09:34:11.288669 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47294: no serving certificate available for the kubelet" Feb 24 09:34:13 crc kubenswrapper[4822]: I0224 09:34:13.629673 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47304: no serving certificate available for the kubelet" Feb 24 09:34:14 crc kubenswrapper[4822]: I0224 09:34:14.410450 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47306: no serving certificate available for the kubelet" Feb 24 09:34:15 crc kubenswrapper[4822]: I0224 09:34:15.677052 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:34:15 crc kubenswrapper[4822]: I0224 09:34:15.677500 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:34:16 crc kubenswrapper[4822]: I0224 09:34:16.675188 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47312: no serving certificate available for the kubelet" Feb 24 09:34:17 crc kubenswrapper[4822]: I0224 09:34:17.449786 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:47316: no serving certificate available for the kubelet" Feb 24 09:34:19 crc kubenswrapper[4822]: I0224 09:34:19.715286 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47324: no serving certificate available for the kubelet" Feb 24 09:34:20 crc kubenswrapper[4822]: I0224 09:34:20.514719 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47332: no serving certificate available for the kubelet" Feb 24 09:34:22 crc kubenswrapper[4822]: I0224 09:34:22.779413 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49458: no serving certificate available for the kubelet" Feb 24 09:34:23 crc kubenswrapper[4822]: I0224 09:34:23.572240 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49462: no serving certificate available for the kubelet" Feb 24 09:34:25 crc kubenswrapper[4822]: I0224 09:34:25.844085 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49476: no serving certificate available for the kubelet" Feb 24 09:34:26 crc kubenswrapper[4822]: I0224 09:34:26.641029 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49484: no serving certificate available for the kubelet" Feb 24 09:34:28 crc kubenswrapper[4822]: I0224 09:34:28.902976 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49500: no serving certificate available for the kubelet" Feb 24 09:34:29 crc kubenswrapper[4822]: I0224 09:34:29.708414 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49516: no serving certificate available for the kubelet" Feb 24 09:34:31 crc kubenswrapper[4822]: I0224 09:34:31.959597 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37610: no serving certificate available for the kubelet" Feb 24 09:34:32 crc kubenswrapper[4822]: I0224 09:34:32.778982 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37618: no serving certificate available for the kubelet" Feb 24 09:34:34 crc kubenswrapper[4822]: I0224 09:34:34.998767 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37620: no 
serving certificate available for the kubelet" Feb 24 09:34:35 crc kubenswrapper[4822]: I0224 09:34:35.830047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37622: no serving certificate available for the kubelet" Feb 24 09:34:38 crc kubenswrapper[4822]: I0224 09:34:38.427740 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37636: no serving certificate available for the kubelet" Feb 24 09:34:38 crc kubenswrapper[4822]: I0224 09:34:38.892832 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37648: no serving certificate available for the kubelet" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.620161 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:40 crc kubenswrapper[4822]: E0224 09:34:40.623638 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4e1118-756b-4a31-9e15-6693440650f5" containerName="collect-profiles" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.624212 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4e1118-756b-4a31-9e15-6693440650f5" containerName="collect-profiles" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.624690 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4e1118-756b-4a31-9e15-6693440650f5" containerName="collect-profiles" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.627299 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.651869 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.705902 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzxt\" (UniqueName: \"kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.706114 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.706164 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.807308 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.807384 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.807449 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzxt\" (UniqueName: \"kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.808311 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.808662 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.845799 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzxt\" (UniqueName: \"kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt\") pod \"redhat-marketplace-mm684\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:40 crc kubenswrapper[4822]: I0224 09:34:40.951906 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:41 crc kubenswrapper[4822]: I0224 09:34:41.451493 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:41 crc kubenswrapper[4822]: I0224 09:34:41.499324 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35266: no serving certificate available for the kubelet" Feb 24 09:34:41 crc kubenswrapper[4822]: I0224 09:34:41.947775 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35272: no serving certificate available for the kubelet" Feb 24 09:34:42 crc kubenswrapper[4822]: I0224 09:34:42.414754 4822 generic.go:334] "Generic (PLEG): container finished" podID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerID="58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095" exitCode=0 Feb 24 09:34:42 crc kubenswrapper[4822]: I0224 09:34:42.414822 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerDied","Data":"58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095"} Feb 24 09:34:42 crc kubenswrapper[4822]: I0224 09:34:42.414861 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerStarted","Data":"2abc8dce0bdd12118d2b5c6f17bcd3221e8d9001319e518d9683f5f71b32aa6d"} Feb 24 09:34:42 crc kubenswrapper[4822]: I0224 09:34:42.420780 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:34:44 crc kubenswrapper[4822]: I0224 09:34:44.439246 4822 generic.go:334] "Generic (PLEG): container finished" podID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerID="019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef" exitCode=0 Feb 24 09:34:44 crc kubenswrapper[4822]: I0224 09:34:44.439545 4822 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerDied","Data":"019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef"} Feb 24 09:34:44 crc kubenswrapper[4822]: I0224 09:34:44.547994 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35278: no serving certificate available for the kubelet" Feb 24 09:34:45 crc kubenswrapper[4822]: I0224 09:34:45.019777 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35292: no serving certificate available for the kubelet" Feb 24 09:34:45 crc kubenswrapper[4822]: I0224 09:34:45.453441 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerStarted","Data":"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab"} Feb 24 09:34:45 crc kubenswrapper[4822]: I0224 09:34:45.485984 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mm684" podStartSLOduration=3.045048215 podStartE2EDuration="5.48596524s" podCreationTimestamp="2026-02-24 09:34:40 +0000 UTC" firstStartedPulling="2026-02-24 09:34:42.420367669 +0000 UTC m=+1604.808130257" lastFinishedPulling="2026-02-24 09:34:44.861284694 +0000 UTC m=+1607.249047282" observedRunningTime="2026-02-24 09:34:45.480211527 +0000 UTC m=+1607.867974115" watchObservedRunningTime="2026-02-24 09:34:45.48596524 +0000 UTC m=+1607.873727798" Feb 24 09:34:45 crc kubenswrapper[4822]: I0224 09:34:45.676829 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:34:45 crc kubenswrapper[4822]: I0224 09:34:45.676906 4822 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:34:47 crc kubenswrapper[4822]: I0224 09:34:47.599559 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35294: no serving certificate available for the kubelet" Feb 24 09:34:48 crc kubenswrapper[4822]: I0224 09:34:48.069092 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35306: no serving certificate available for the kubelet" Feb 24 09:34:50 crc kubenswrapper[4822]: I0224 09:34:50.658685 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35316: no serving certificate available for the kubelet" Feb 24 09:34:50 crc kubenswrapper[4822]: I0224 09:34:50.952241 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:50 crc kubenswrapper[4822]: I0224 09:34:50.952333 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:51 crc kubenswrapper[4822]: I0224 09:34:51.035672 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:51 crc kubenswrapper[4822]: I0224 09:34:51.125680 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58608: no serving certificate available for the kubelet" Feb 24 09:34:51 crc kubenswrapper[4822]: I0224 09:34:51.608855 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:51 crc kubenswrapper[4822]: I0224 09:34:51.660860 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:53 crc 
kubenswrapper[4822]: I0224 09:34:53.535277 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mm684" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="registry-server" containerID="cri-o://154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab" gracePeriod=2 Feb 24 09:34:53 crc kubenswrapper[4822]: I0224 09:34:53.721694 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58612: no serving certificate available for the kubelet" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.074883 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.139493 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities\") pod \"c3a6e320-6aea-4c8a-bb58-a791609b2001\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.139671 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzxt\" (UniqueName: \"kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt\") pod \"c3a6e320-6aea-4c8a-bb58-a791609b2001\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.139737 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content\") pod \"c3a6e320-6aea-4c8a-bb58-a791609b2001\" (UID: \"c3a6e320-6aea-4c8a-bb58-a791609b2001\") " Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.142604 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities" (OuterVolumeSpecName: "utilities") pod "c3a6e320-6aea-4c8a-bb58-a791609b2001" (UID: "c3a6e320-6aea-4c8a-bb58-a791609b2001"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.147132 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt" (OuterVolumeSpecName: "kube-api-access-bhzxt") pod "c3a6e320-6aea-4c8a-bb58-a791609b2001" (UID: "c3a6e320-6aea-4c8a-bb58-a791609b2001"). InnerVolumeSpecName "kube-api-access-bhzxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.173339 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a6e320-6aea-4c8a-bb58-a791609b2001" (UID: "c3a6e320-6aea-4c8a-bb58-a791609b2001"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.176343 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58616: no serving certificate available for the kubelet" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.242011 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.242074 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzxt\" (UniqueName: \"kubernetes.io/projected/c3a6e320-6aea-4c8a-bb58-a791609b2001-kube-api-access-bhzxt\") on node \"crc\" DevicePath \"\"" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.242096 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a6e320-6aea-4c8a-bb58-a791609b2001-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.546119 4822 generic.go:334] "Generic (PLEG): container finished" podID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerID="154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab" exitCode=0 Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.546166 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerDied","Data":"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab"} Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.546196 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm684" event={"ID":"c3a6e320-6aea-4c8a-bb58-a791609b2001","Type":"ContainerDied","Data":"2abc8dce0bdd12118d2b5c6f17bcd3221e8d9001319e518d9683f5f71b32aa6d"} Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 
09:34:54.546213 4822 scope.go:117] "RemoveContainer" containerID="154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.546304 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm684" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.586344 4822 scope.go:117] "RemoveContainer" containerID="019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.607718 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.612163 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm684"] Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.619198 4822 scope.go:117] "RemoveContainer" containerID="58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.654197 4822 scope.go:117] "RemoveContainer" containerID="154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab" Feb 24 09:34:54 crc kubenswrapper[4822]: E0224 09:34:54.654797 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab\": container with ID starting with 154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab not found: ID does not exist" containerID="154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.654865 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab"} err="failed to get container status 
\"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab\": rpc error: code = NotFound desc = could not find container \"154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab\": container with ID starting with 154ec6d3db53485c1e92bc01474cfcdca093434ef380e831f2b8a8dae72b30ab not found: ID does not exist" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.654898 4822 scope.go:117] "RemoveContainer" containerID="019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef" Feb 24 09:34:54 crc kubenswrapper[4822]: E0224 09:34:54.655364 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef\": container with ID starting with 019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef not found: ID does not exist" containerID="019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.655412 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef"} err="failed to get container status \"019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef\": rpc error: code = NotFound desc = could not find container \"019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef\": container with ID starting with 019df83a1bd3d8716e32e23af1ccf913281fb987a0c3cce606023b79c74bdbef not found: ID does not exist" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.655443 4822 scope.go:117] "RemoveContainer" containerID="58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095" Feb 24 09:34:54 crc kubenswrapper[4822]: E0224 09:34:54.656153 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095\": container with ID starting with 58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095 not found: ID does not exist" containerID="58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095" Feb 24 09:34:54 crc kubenswrapper[4822]: I0224 09:34:54.656225 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095"} err="failed to get container status \"58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095\": rpc error: code = NotFound desc = could not find container \"58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095\": container with ID starting with 58c289c9829c71de07fe436a6417f6575f8d30a1a7663f99ce3bcf2de9c17095 not found: ID does not exist" Feb 24 09:34:56 crc kubenswrapper[4822]: I0224 09:34:56.355184 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" path="/var/lib/kubelet/pods/c3a6e320-6aea-4c8a-bb58-a791609b2001/volumes" Feb 24 09:34:56 crc kubenswrapper[4822]: I0224 09:34:56.783625 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58624: no serving certificate available for the kubelet" Feb 24 09:34:57 crc kubenswrapper[4822]: I0224 09:34:57.214212 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58640: no serving certificate available for the kubelet" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.997144 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:34:58 crc kubenswrapper[4822]: E0224 09:34:58.997727 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="extract-content" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.997744 4822 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="extract-content" Feb 24 09:34:58 crc kubenswrapper[4822]: E0224 09:34:58.997760 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="registry-server" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.997779 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="registry-server" Feb 24 09:34:58 crc kubenswrapper[4822]: E0224 09:34:58.997808 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="extract-utilities" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.997816 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="extract-utilities" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.998017 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a6e320-6aea-4c8a-bb58-a791609b2001" containerName="registry-server" Feb 24 09:34:58 crc kubenswrapper[4822]: I0224 09:34:58.999280 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.025945 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.033430 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jvx\" (UniqueName: \"kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.033639 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.033945 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.135675 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jvx\" (UniqueName: \"kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.135765 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.135843 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.136335 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.136616 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.152604 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jvx\" (UniqueName: \"kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx\") pod \"community-operators-9l6vr\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.323788 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.819026 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:34:59 crc kubenswrapper[4822]: I0224 09:34:59.841374 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58650: no serving certificate available for the kubelet" Feb 24 09:35:00 crc kubenswrapper[4822]: I0224 09:35:00.265678 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58654: no serving certificate available for the kubelet" Feb 24 09:35:00 crc kubenswrapper[4822]: I0224 09:35:00.628435 4822 generic.go:334] "Generic (PLEG): container finished" podID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerID="ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7" exitCode=0 Feb 24 09:35:00 crc kubenswrapper[4822]: I0224 09:35:00.628511 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerDied","Data":"ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7"} Feb 24 09:35:00 crc kubenswrapper[4822]: I0224 09:35:00.628585 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerStarted","Data":"8e3d77b969de0dd72fdc2be912d46db0a79c5d3810dae95be0687e56355c9dfb"} Feb 24 09:35:01 crc kubenswrapper[4822]: I0224 09:35:01.636248 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerStarted","Data":"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7"} Feb 24 09:35:02 crc kubenswrapper[4822]: I0224 09:35:02.649656 4822 generic.go:334] "Generic (PLEG): container finished" 
podID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerID="3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7" exitCode=0 Feb 24 09:35:02 crc kubenswrapper[4822]: I0224 09:35:02.650040 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerDied","Data":"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7"} Feb 24 09:35:02 crc kubenswrapper[4822]: I0224 09:35:02.893051 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41164: no serving certificate available for the kubelet" Feb 24 09:35:03 crc kubenswrapper[4822]: I0224 09:35:03.309857 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41174: no serving certificate available for the kubelet" Feb 24 09:35:03 crc kubenswrapper[4822]: I0224 09:35:03.658690 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerStarted","Data":"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6"} Feb 24 09:35:03 crc kubenswrapper[4822]: I0224 09:35:03.690991 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9l6vr" podStartSLOduration=3.206145811 podStartE2EDuration="5.690891702s" podCreationTimestamp="2026-02-24 09:34:58 +0000 UTC" firstStartedPulling="2026-02-24 09:35:00.631362892 +0000 UTC m=+1623.019125470" lastFinishedPulling="2026-02-24 09:35:03.116108793 +0000 UTC m=+1625.503871361" observedRunningTime="2026-02-24 09:35:03.684748009 +0000 UTC m=+1626.072510567" watchObservedRunningTime="2026-02-24 09:35:03.690891702 +0000 UTC m=+1626.078654280" Feb 24 09:35:05 crc kubenswrapper[4822]: I0224 09:35:05.933081 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41182: no serving certificate available for the kubelet" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 
09:35:06.349375 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41198: no serving certificate available for the kubelet" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.366698 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.369722 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.376715 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.470176 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.470225 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.470403 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4gcl\" (UniqueName: \"kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.571437 4822 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.571483 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.571559 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4gcl\" (UniqueName: \"kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.572250 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.572493 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.596105 4822 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k4gcl\" (UniqueName: \"kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl\") pod \"certified-operators-f84pq\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:06 crc kubenswrapper[4822]: I0224 09:35:06.694643 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:07 crc kubenswrapper[4822]: I0224 09:35:07.182704 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:07 crc kubenswrapper[4822]: W0224 09:35:07.187246 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21acd4cb_8a1e_40ec_b59a_83a8dd00fd91.slice/crio-9eecd1ba1a2e1fee83d06939a7d7221803fc5a921bc4fb608ca66cfb0b92ba64 WatchSource:0}: Error finding container 9eecd1ba1a2e1fee83d06939a7d7221803fc5a921bc4fb608ca66cfb0b92ba64: Status 404 returned error can't find the container with id 9eecd1ba1a2e1fee83d06939a7d7221803fc5a921bc4fb608ca66cfb0b92ba64 Feb 24 09:35:07 crc kubenswrapper[4822]: I0224 09:35:07.699618 4822 generic.go:334] "Generic (PLEG): container finished" podID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerID="ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3" exitCode=0 Feb 24 09:35:07 crc kubenswrapper[4822]: I0224 09:35:07.699741 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f84pq" event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerDied","Data":"ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3"} Feb 24 09:35:07 crc kubenswrapper[4822]: I0224 09:35:07.699783 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f84pq" 
event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerStarted","Data":"9eecd1ba1a2e1fee83d06939a7d7221803fc5a921bc4fb608ca66cfb0b92ba64"} Feb 24 09:35:08 crc kubenswrapper[4822]: I0224 09:35:08.989518 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41214: no serving certificate available for the kubelet" Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.324458 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.324526 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.380165 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.396536 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41226: no serving certificate available for the kubelet" Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.722790 4822 generic.go:334] "Generic (PLEG): container finished" podID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerID="ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8" exitCode=0 Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.722873 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f84pq" event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerDied","Data":"ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8"} Feb 24 09:35:09 crc kubenswrapper[4822]: I0224 09:35:09.804519 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:10 crc kubenswrapper[4822]: I0224 09:35:10.730662 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-f84pq" event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerStarted","Data":"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387"} Feb 24 09:35:10 crc kubenswrapper[4822]: I0224 09:35:10.748874 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f84pq" podStartSLOduration=2.2893876300000002 podStartE2EDuration="4.748855889s" podCreationTimestamp="2026-02-24 09:35:06 +0000 UTC" firstStartedPulling="2026-02-24 09:35:07.702005687 +0000 UTC m=+1630.089768265" lastFinishedPulling="2026-02-24 09:35:10.161473966 +0000 UTC m=+1632.549236524" observedRunningTime="2026-02-24 09:35:10.744696139 +0000 UTC m=+1633.132458707" watchObservedRunningTime="2026-02-24 09:35:10.748855889 +0000 UTC m=+1633.136618457" Feb 24 09:35:11 crc kubenswrapper[4822]: I0224 09:35:11.365476 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:35:11 crc kubenswrapper[4822]: I0224 09:35:11.740187 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9l6vr" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="registry-server" containerID="cri-o://d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6" gracePeriod=2 Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.055093 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46938: no serving certificate available for the kubelet" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.249214 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.409394 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities\") pod \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.409841 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jvx\" (UniqueName: \"kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx\") pod \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.409875 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content\") pod \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\" (UID: \"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64\") " Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.410295 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities" (OuterVolumeSpecName: "utilities") pod "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" (UID: "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.420108 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx" (OuterVolumeSpecName: "kube-api-access-44jvx") pod "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" (UID: "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64"). InnerVolumeSpecName "kube-api-access-44jvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.439228 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46942: no serving certificate available for the kubelet" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.464602 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" (UID: "c64cbb19-28f8-40e0-8a8f-c45a4a7cea64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.513317 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.513362 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jvx\" (UniqueName: \"kubernetes.io/projected/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-kube-api-access-44jvx\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.513377 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.750685 4822 generic.go:334] "Generic (PLEG): container finished" podID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerID="d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6" exitCode=0 Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.750728 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" 
event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerDied","Data":"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6"} Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.750753 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l6vr" event={"ID":"c64cbb19-28f8-40e0-8a8f-c45a4a7cea64","Type":"ContainerDied","Data":"8e3d77b969de0dd72fdc2be912d46db0a79c5d3810dae95be0687e56355c9dfb"} Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.750769 4822 scope.go:117] "RemoveContainer" containerID="d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.750886 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l6vr" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.794488 4822 scope.go:117] "RemoveContainer" containerID="3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.795863 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.801531 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9l6vr"] Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.826517 4822 scope.go:117] "RemoveContainer" containerID="ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.863270 4822 scope.go:117] "RemoveContainer" containerID="d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6" Feb 24 09:35:12 crc kubenswrapper[4822]: E0224 09:35:12.863758 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6\": container 
with ID starting with d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6 not found: ID does not exist" containerID="d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.863796 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6"} err="failed to get container status \"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6\": rpc error: code = NotFound desc = could not find container \"d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6\": container with ID starting with d0bc672cf92c1e009cfa1fcf59565ae7fc1e89f6d6a8dfee626d3934bca2bcc6 not found: ID does not exist" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.863821 4822 scope.go:117] "RemoveContainer" containerID="3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7" Feb 24 09:35:12 crc kubenswrapper[4822]: E0224 09:35:12.864466 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7\": container with ID starting with 3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7 not found: ID does not exist" containerID="3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.864537 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7"} err="failed to get container status \"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7\": rpc error: code = NotFound desc = could not find container \"3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7\": container with ID starting with 3a541a7adcee5f932bf3ae1b8acffd67296e4c91d932f2b7392b5882a66929b7 not 
found: ID does not exist" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.864579 4822 scope.go:117] "RemoveContainer" containerID="ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7" Feb 24 09:35:12 crc kubenswrapper[4822]: E0224 09:35:12.864969 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7\": container with ID starting with ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7 not found: ID does not exist" containerID="ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7" Feb 24 09:35:12 crc kubenswrapper[4822]: I0224 09:35:12.865059 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7"} err="failed to get container status \"ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7\": rpc error: code = NotFound desc = could not find container \"ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7\": container with ID starting with ff6a56925f45d738c375c0a547df94a633df61044d8b57191eecc8f3b9577ce7 not found: ID does not exist" Feb 24 09:35:14 crc kubenswrapper[4822]: I0224 09:35:14.353804 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" path="/var/lib/kubelet/pods/c64cbb19-28f8-40e0-8a8f-c45a4a7cea64/volumes" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.088151 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46952: no serving certificate available for the kubelet" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.486566 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46960: no serving certificate available for the kubelet" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.676330 4822 patch_prober.go:28] interesting 
pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.676410 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.676473 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.677491 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:35:15 crc kubenswrapper[4822]: I0224 09:35:15.677605 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" gracePeriod=600 Feb 24 09:35:15 crc kubenswrapper[4822]: E0224 09:35:15.801110 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.695282 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.695755 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.774009 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.827423 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" exitCode=0 Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.830325 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180"} Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.830541 4822 scope.go:117] "RemoveContainer" containerID="432ab20c8bdd0b5429b1e771a9c702bb2c3c1068e7afa6a38fb33a2c4a67c545" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.831321 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:35:16 crc kubenswrapper[4822]: E0224 09:35:16.831823 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:35:16 crc kubenswrapper[4822]: I0224 09:35:16.914982 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:17 crc kubenswrapper[4822]: I0224 09:35:17.766956 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:18 crc kubenswrapper[4822]: I0224 09:35:18.122300 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46976: no serving certificate available for the kubelet" Feb 24 09:35:18 crc kubenswrapper[4822]: I0224 09:35:18.545636 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46988: no serving certificate available for the kubelet" Feb 24 09:35:18 crc kubenswrapper[4822]: I0224 09:35:18.850068 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f84pq" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="registry-server" containerID="cri-o://7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387" gracePeriod=2 Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.498028 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.673528 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities\") pod \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.673675 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content\") pod \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.673747 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4gcl\" (UniqueName: \"kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl\") pod \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\" (UID: \"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91\") " Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.674392 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities" (OuterVolumeSpecName: "utilities") pod "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" (UID: "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.681097 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl" (OuterVolumeSpecName: "kube-api-access-k4gcl") pod "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" (UID: "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91"). InnerVolumeSpecName "kube-api-access-k4gcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.724242 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" (UID: "21acd4cb-8a1e-40ec-b59a-83a8dd00fd91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.774935 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.774967 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.774979 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4gcl\" (UniqueName: \"kubernetes.io/projected/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91-kube-api-access-k4gcl\") on node \"crc\" DevicePath \"\"" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.862305 4822 generic.go:334] "Generic (PLEG): container finished" podID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerID="7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387" exitCode=0 Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.862353 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f84pq" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.862354 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f84pq" event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerDied","Data":"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387"} Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.862458 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f84pq" event={"ID":"21acd4cb-8a1e-40ec-b59a-83a8dd00fd91","Type":"ContainerDied","Data":"9eecd1ba1a2e1fee83d06939a7d7221803fc5a921bc4fb608ca66cfb0b92ba64"} Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.862480 4822 scope.go:117] "RemoveContainer" containerID="7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.885159 4822 scope.go:117] "RemoveContainer" containerID="ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.909631 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.922524 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f84pq"] Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.926193 4822 scope.go:117] "RemoveContainer" containerID="ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.944973 4822 scope.go:117] "RemoveContainer" containerID="7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387" Feb 24 09:35:19 crc kubenswrapper[4822]: E0224 09:35:19.945481 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387\": container with ID starting with 7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387 not found: ID does not exist" containerID="7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.945534 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387"} err="failed to get container status \"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387\": rpc error: code = NotFound desc = could not find container \"7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387\": container with ID starting with 7fa3cacace4b04f00dd01f4f6d9357b6e6db7752df985e821d26249df60fe387 not found: ID does not exist" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.945566 4822 scope.go:117] "RemoveContainer" containerID="ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8" Feb 24 09:35:19 crc kubenswrapper[4822]: E0224 09:35:19.946704 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8\": container with ID starting with ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8 not found: ID does not exist" containerID="ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.946749 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8"} err="failed to get container status \"ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8\": rpc error: code = NotFound desc = could not find container \"ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8\": container with ID 
starting with ef1e597f095c25aaf3bb3094cbc9f9d3a19d93c8a0ceef36f834dc961f8a9df8 not found: ID does not exist" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.946773 4822 scope.go:117] "RemoveContainer" containerID="ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3" Feb 24 09:35:19 crc kubenswrapper[4822]: E0224 09:35:19.948282 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3\": container with ID starting with ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3 not found: ID does not exist" containerID="ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3" Feb 24 09:35:19 crc kubenswrapper[4822]: I0224 09:35:19.948370 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3"} err="failed to get container status \"ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3\": rpc error: code = NotFound desc = could not find container \"ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3\": container with ID starting with ade6415b4c7c755ae20210df29df8abc2d35889c98dd4f025d3e0fb668aa4af3 not found: ID does not exist" Feb 24 09:35:20 crc kubenswrapper[4822]: I0224 09:35:20.347767 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" path="/var/lib/kubelet/pods/21acd4cb-8a1e-40ec-b59a-83a8dd00fd91/volumes" Feb 24 09:35:21 crc kubenswrapper[4822]: I0224 09:35:21.178319 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53182: no serving certificate available for the kubelet" Feb 24 09:35:21 crc kubenswrapper[4822]: I0224 09:35:21.614450 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53194: no serving certificate available for the kubelet" Feb 24 09:35:24 crc kubenswrapper[4822]: 
I0224 09:35:24.223780 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53210: no serving certificate available for the kubelet" Feb 24 09:35:24 crc kubenswrapper[4822]: I0224 09:35:24.665297 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53216: no serving certificate available for the kubelet" Feb 24 09:35:27 crc kubenswrapper[4822]: I0224 09:35:27.277389 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53224: no serving certificate available for the kubelet" Feb 24 09:35:27 crc kubenswrapper[4822]: I0224 09:35:27.730651 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53230: no serving certificate available for the kubelet" Feb 24 09:35:30 crc kubenswrapper[4822]: I0224 09:35:30.329628 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53238: no serving certificate available for the kubelet" Feb 24 09:35:30 crc kubenswrapper[4822]: I0224 09:35:30.783818 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53244: no serving certificate available for the kubelet" Feb 24 09:35:31 crc kubenswrapper[4822]: I0224 09:35:31.338036 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:35:31 crc kubenswrapper[4822]: E0224 09:35:31.338430 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:35:33 crc kubenswrapper[4822]: I0224 09:35:33.381050 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41382: no serving certificate available for the kubelet" Feb 24 09:35:33 crc kubenswrapper[4822]: I0224 09:35:33.838161 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:41396: no serving certificate available for the kubelet" Feb 24 09:35:36 crc kubenswrapper[4822]: I0224 09:35:36.442488 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41402: no serving certificate available for the kubelet" Feb 24 09:35:36 crc kubenswrapper[4822]: I0224 09:35:36.895048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41416: no serving certificate available for the kubelet" Feb 24 09:35:39 crc kubenswrapper[4822]: I0224 09:35:39.484415 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41432: no serving certificate available for the kubelet" Feb 24 09:35:39 crc kubenswrapper[4822]: I0224 09:35:39.952403 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41442: no serving certificate available for the kubelet" Feb 24 09:35:42 crc kubenswrapper[4822]: I0224 09:35:42.535455 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53218: no serving certificate available for the kubelet" Feb 24 09:35:43 crc kubenswrapper[4822]: I0224 09:35:43.011247 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53220: no serving certificate available for the kubelet" Feb 24 09:35:43 crc kubenswrapper[4822]: I0224 09:35:43.337836 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:35:43 crc kubenswrapper[4822]: E0224 09:35:43.338366 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:35:45 crc kubenswrapper[4822]: I0224 09:35:45.587836 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53236: no serving certificate available for the kubelet" 
Feb 24 09:35:46 crc kubenswrapper[4822]: I0224 09:35:46.062679 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53240: no serving certificate available for the kubelet" Feb 24 09:35:48 crc kubenswrapper[4822]: I0224 09:35:48.645800 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53246: no serving certificate available for the kubelet" Feb 24 09:35:49 crc kubenswrapper[4822]: I0224 09:35:49.112969 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53250: no serving certificate available for the kubelet" Feb 24 09:35:51 crc kubenswrapper[4822]: I0224 09:35:51.701055 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33698: no serving certificate available for the kubelet" Feb 24 09:35:52 crc kubenswrapper[4822]: I0224 09:35:52.172800 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33714: no serving certificate available for the kubelet" Feb 24 09:35:54 crc kubenswrapper[4822]: I0224 09:35:54.795021 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33716: no serving certificate available for the kubelet" Feb 24 09:35:55 crc kubenswrapper[4822]: I0224 09:35:55.234163 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33728: no serving certificate available for the kubelet" Feb 24 09:35:57 crc kubenswrapper[4822]: I0224 09:35:57.924948 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33742: no serving certificate available for the kubelet" Feb 24 09:35:58 crc kubenswrapper[4822]: I0224 09:35:58.293197 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33758: no serving certificate available for the kubelet" Feb 24 09:35:58 crc kubenswrapper[4822]: I0224 09:35:58.347217 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:35:58 crc kubenswrapper[4822]: E0224 09:35:58.347763 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:36:00 crc kubenswrapper[4822]: I0224 09:36:00.978324 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33766: no serving certificate available for the kubelet" Feb 24 09:36:01 crc kubenswrapper[4822]: I0224 09:36:01.353085 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53388: no serving certificate available for the kubelet" Feb 24 09:36:04 crc kubenswrapper[4822]: I0224 09:36:04.041708 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53396: no serving certificate available for the kubelet" Feb 24 09:36:04 crc kubenswrapper[4822]: I0224 09:36:04.403581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53402: no serving certificate available for the kubelet" Feb 24 09:36:07 crc kubenswrapper[4822]: I0224 09:36:07.097836 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53408: no serving certificate available for the kubelet" Feb 24 09:36:07 crc kubenswrapper[4822]: I0224 09:36:07.459016 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53420: no serving certificate available for the kubelet" Feb 24 09:36:10 crc kubenswrapper[4822]: I0224 09:36:10.148393 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53424: no serving certificate available for the kubelet" Feb 24 09:36:10 crc kubenswrapper[4822]: I0224 09:36:10.518100 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53434: no serving certificate available for the kubelet" Feb 24 09:36:11 crc kubenswrapper[4822]: I0224 09:36:11.337897 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:36:11 crc kubenswrapper[4822]: E0224 09:36:11.338354 4822 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:36:13 crc kubenswrapper[4822]: I0224 09:36:13.222639 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50758: no serving certificate available for the kubelet" Feb 24 09:36:13 crc kubenswrapper[4822]: I0224 09:36:13.581233 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50762: no serving certificate available for the kubelet" Feb 24 09:36:16 crc kubenswrapper[4822]: I0224 09:36:16.281827 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50778: no serving certificate available for the kubelet" Feb 24 09:36:16 crc kubenswrapper[4822]: I0224 09:36:16.643095 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50784: no serving certificate available for the kubelet" Feb 24 09:36:19 crc kubenswrapper[4822]: I0224 09:36:19.345123 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50796: no serving certificate available for the kubelet" Feb 24 09:36:19 crc kubenswrapper[4822]: I0224 09:36:19.706123 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50804: no serving certificate available for the kubelet" Feb 24 09:36:22 crc kubenswrapper[4822]: I0224 09:36:22.405659 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54846: no serving certificate available for the kubelet" Feb 24 09:36:22 crc kubenswrapper[4822]: I0224 09:36:22.760570 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54850: no serving certificate available for the kubelet" Feb 24 09:36:25 crc kubenswrapper[4822]: I0224 09:36:25.463777 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54862: no serving certificate available for the 
kubelet" Feb 24 09:36:25 crc kubenswrapper[4822]: I0224 09:36:25.800551 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54874: no serving certificate available for the kubelet" Feb 24 09:36:26 crc kubenswrapper[4822]: I0224 09:36:26.337078 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:36:26 crc kubenswrapper[4822]: E0224 09:36:26.337392 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:36:27 crc kubenswrapper[4822]: I0224 09:36:27.520558 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54882: no serving certificate available for the kubelet" Feb 24 09:36:28 crc kubenswrapper[4822]: I0224 09:36:28.523636 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54886: no serving certificate available for the kubelet" Feb 24 09:36:28 crc kubenswrapper[4822]: I0224 09:36:28.856415 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54900: no serving certificate available for the kubelet" Feb 24 09:36:31 crc kubenswrapper[4822]: I0224 09:36:31.582974 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46848: no serving certificate available for the kubelet" Feb 24 09:36:31 crc kubenswrapper[4822]: I0224 09:36:31.907932 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46864: no serving certificate available for the kubelet" Feb 24 09:36:34 crc kubenswrapper[4822]: I0224 09:36:34.641035 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46880: no serving certificate available for the kubelet" Feb 24 09:36:34 crc kubenswrapper[4822]: I0224 09:36:34.968108 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:46888: no serving certificate available for the kubelet" Feb 24 09:36:36 crc kubenswrapper[4822]: I0224 09:36:36.800140 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46894: no serving certificate available for the kubelet" Feb 24 09:36:37 crc kubenswrapper[4822]: I0224 09:36:37.701158 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46896: no serving certificate available for the kubelet" Feb 24 09:36:38 crc kubenswrapper[4822]: I0224 09:36:38.032900 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46902: no serving certificate available for the kubelet" Feb 24 09:36:40 crc kubenswrapper[4822]: I0224 09:36:40.610725 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46906: no serving certificate available for the kubelet" Feb 24 09:36:40 crc kubenswrapper[4822]: I0224 09:36:40.750563 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46918: no serving certificate available for the kubelet" Feb 24 09:36:41 crc kubenswrapper[4822]: I0224 09:36:41.166481 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59764: no serving certificate available for the kubelet" Feb 24 09:36:41 crc kubenswrapper[4822]: I0224 09:36:41.337586 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:36:41 crc kubenswrapper[4822]: E0224 09:36:41.337864 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:36:43 crc kubenswrapper[4822]: I0224 09:36:43.802067 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59766: no 
serving certificate available for the kubelet" Feb 24 09:36:44 crc kubenswrapper[4822]: I0224 09:36:44.271037 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59778: no serving certificate available for the kubelet" Feb 24 09:36:46 crc kubenswrapper[4822]: I0224 09:36:46.859491 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59786: no serving certificate available for the kubelet" Feb 24 09:36:47 crc kubenswrapper[4822]: I0224 09:36:47.321522 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59798: no serving certificate available for the kubelet" Feb 24 09:36:49 crc kubenswrapper[4822]: I0224 09:36:49.920815 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59806: no serving certificate available for the kubelet" Feb 24 09:36:50 crc kubenswrapper[4822]: I0224 09:36:50.382563 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59812: no serving certificate available for the kubelet" Feb 24 09:36:52 crc kubenswrapper[4822]: I0224 09:36:52.971558 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47770: no serving certificate available for the kubelet" Feb 24 09:36:53 crc kubenswrapper[4822]: I0224 09:36:53.337759 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:36:53 crc kubenswrapper[4822]: E0224 09:36:53.338287 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:36:53 crc kubenswrapper[4822]: I0224 09:36:53.443034 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47782: no serving certificate available for the kubelet" Feb 24 09:36:56 crc 
kubenswrapper[4822]: I0224 09:36:56.037022 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47788: no serving certificate available for the kubelet" Feb 24 09:36:56 crc kubenswrapper[4822]: I0224 09:36:56.488622 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47800: no serving certificate available for the kubelet" Feb 24 09:36:59 crc kubenswrapper[4822]: I0224 09:36:59.088039 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47806: no serving certificate available for the kubelet" Feb 24 09:36:59 crc kubenswrapper[4822]: I0224 09:36:59.538722 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47812: no serving certificate available for the kubelet" Feb 24 09:37:02 crc kubenswrapper[4822]: I0224 09:37:02.144461 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42892: no serving certificate available for the kubelet" Feb 24 09:37:02 crc kubenswrapper[4822]: I0224 09:37:02.583183 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42900: no serving certificate available for the kubelet" Feb 24 09:37:05 crc kubenswrapper[4822]: I0224 09:37:05.193666 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42914: no serving certificate available for the kubelet" Feb 24 09:37:05 crc kubenswrapper[4822]: I0224 09:37:05.338463 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:37:05 crc kubenswrapper[4822]: E0224 09:37:05.338976 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:37:05 crc kubenswrapper[4822]: I0224 09:37:05.643217 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:42920: no serving certificate available for the kubelet" Feb 24 09:37:08 crc kubenswrapper[4822]: I0224 09:37:08.256014 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42928: no serving certificate available for the kubelet" Feb 24 09:37:08 crc kubenswrapper[4822]: I0224 09:37:08.716465 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42944: no serving certificate available for the kubelet" Feb 24 09:37:11 crc kubenswrapper[4822]: I0224 09:37:11.309087 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42520: no serving certificate available for the kubelet" Feb 24 09:37:11 crc kubenswrapper[4822]: I0224 09:37:11.756640 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42536: no serving certificate available for the kubelet" Feb 24 09:37:14 crc kubenswrapper[4822]: I0224 09:37:14.370595 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42542: no serving certificate available for the kubelet" Feb 24 09:37:14 crc kubenswrapper[4822]: I0224 09:37:14.808377 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42546: no serving certificate available for the kubelet" Feb 24 09:37:16 crc kubenswrapper[4822]: I0224 09:37:16.338050 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:37:16 crc kubenswrapper[4822]: E0224 09:37:16.338636 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:37:17 crc kubenswrapper[4822]: I0224 09:37:17.428245 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42550: no serving certificate 
available for the kubelet" Feb 24 09:37:17 crc kubenswrapper[4822]: I0224 09:37:17.861603 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42560: no serving certificate available for the kubelet" Feb 24 09:37:20 crc kubenswrapper[4822]: I0224 09:37:20.482346 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42564: no serving certificate available for the kubelet" Feb 24 09:37:20 crc kubenswrapper[4822]: I0224 09:37:20.919014 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42578: no serving certificate available for the kubelet" Feb 24 09:37:23 crc kubenswrapper[4822]: I0224 09:37:23.544774 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34882: no serving certificate available for the kubelet" Feb 24 09:37:24 crc kubenswrapper[4822]: I0224 09:37:24.028541 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34886: no serving certificate available for the kubelet" Feb 24 09:37:24 crc kubenswrapper[4822]: I0224 09:37:24.783463 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:37:24 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:37:24 crc kubenswrapper[4822]: > Feb 24 09:37:24 crc kubenswrapper[4822]: I0224 09:37:24.783578 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:37:24 crc kubenswrapper[4822]: I0224 09:37:24.784444 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:37:24 crc kubenswrapper[4822]: I0224 09:37:24.871883 4822 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd" gracePeriod=30 Feb 24 09:37:25 crc kubenswrapper[4822]: I0224 09:37:25.150895 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd" exitCode=143 Feb 24 09:37:25 crc kubenswrapper[4822]: I0224 09:37:25.150968 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd"} Feb 24 09:37:25 crc kubenswrapper[4822]: I0224 09:37:25.151864 4822 scope.go:117] "RemoveContainer" containerID="7f5e57d03ffaa11b105be338eb80203fd926e436bc685e02891070b18b62f1f4" Feb 24 09:37:26 crc kubenswrapper[4822]: I0224 09:37:26.166168 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f"} Feb 24 09:37:26 crc kubenswrapper[4822]: I0224 09:37:26.590535 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34896: no serving certificate available for the kubelet" Feb 24 09:37:27 crc kubenswrapper[4822]: I0224 09:37:27.077049 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34902: no serving certificate available for the kubelet" Feb 24 09:37:29 crc kubenswrapper[4822]: I0224 09:37:29.656420 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34906: no serving certificate available for the kubelet" Feb 24 09:37:30 crc kubenswrapper[4822]: I0224 09:37:30.131816 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34920: no serving certificate available for the kubelet" Feb 24 09:37:31 
crc kubenswrapper[4822]: I0224 09:37:31.261042 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:37:31 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:37:31 crc kubenswrapper[4822]: > Feb 24 09:37:31 crc kubenswrapper[4822]: I0224 09:37:31.261127 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:37:31 crc kubenswrapper[4822]: I0224 09:37:31.261885 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:37:31 crc kubenswrapper[4822]: I0224 09:37:31.325718 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059" gracePeriod=30 Feb 24 09:37:31 crc kubenswrapper[4822]: I0224 09:37:31.337341 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:37:31 crc kubenswrapper[4822]: E0224 09:37:31.337764 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:37:32 crc kubenswrapper[4822]: I0224 09:37:32.232106 4822 generic.go:334] "Generic 
(PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059" exitCode=143 Feb 24 09:37:32 crc kubenswrapper[4822]: I0224 09:37:32.232249 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059"} Feb 24 09:37:32 crc kubenswrapper[4822]: I0224 09:37:32.232460 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f"} Feb 24 09:37:32 crc kubenswrapper[4822]: I0224 09:37:32.232488 4822 scope.go:117] "RemoveContainer" containerID="715ebd280f2edaae16b304c9710c830e20238fa9e80562afe51d830a9ee205d9" Feb 24 09:37:32 crc kubenswrapper[4822]: I0224 09:37:32.703833 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49738: no serving certificate available for the kubelet" Feb 24 09:37:33 crc kubenswrapper[4822]: I0224 09:37:33.057582 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:37:33 crc kubenswrapper[4822]: I0224 09:37:33.057626 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:37:33 crc kubenswrapper[4822]: I0224 09:37:33.183408 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49740: no serving certificate available for the kubelet" Feb 24 09:37:35 crc kubenswrapper[4822]: I0224 09:37:35.763783 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49754: no serving certificate available for the kubelet" Feb 24 09:37:36 crc kubenswrapper[4822]: I0224 09:37:36.246412 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49768: no serving certificate 
available for the kubelet" Feb 24 09:37:38 crc kubenswrapper[4822]: I0224 09:37:38.817589 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49774: no serving certificate available for the kubelet" Feb 24 09:37:39 crc kubenswrapper[4822]: I0224 09:37:39.324604 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49784: no serving certificate available for the kubelet" Feb 24 09:37:41 crc kubenswrapper[4822]: I0224 09:37:41.616797 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:37:41 crc kubenswrapper[4822]: I0224 09:37:41.617347 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:37:41 crc kubenswrapper[4822]: I0224 09:37:41.894657 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55246: no serving certificate available for the kubelet" Feb 24 09:37:42 crc kubenswrapper[4822]: I0224 09:37:42.338826 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:37:42 crc kubenswrapper[4822]: E0224 09:37:42.339372 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:37:42 crc kubenswrapper[4822]: I0224 09:37:42.382477 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55258: no serving certificate available for the kubelet" Feb 24 09:37:44 crc kubenswrapper[4822]: I0224 09:37:44.957107 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55260: no serving certificate available for the kubelet" Feb 24 09:37:45 crc kubenswrapper[4822]: I0224 09:37:45.436699 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:55268: no serving certificate available for the kubelet" Feb 24 09:37:48 crc kubenswrapper[4822]: I0224 09:37:48.020326 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55272: no serving certificate available for the kubelet" Feb 24 09:37:48 crc kubenswrapper[4822]: I0224 09:37:48.499021 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55274: no serving certificate available for the kubelet" Feb 24 09:37:51 crc kubenswrapper[4822]: I0224 09:37:51.079068 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52158: no serving certificate available for the kubelet" Feb 24 09:37:51 crc kubenswrapper[4822]: I0224 09:37:51.568746 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52170: no serving certificate available for the kubelet" Feb 24 09:37:54 crc kubenswrapper[4822]: I0224 09:37:54.132695 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52186: no serving certificate available for the kubelet" Feb 24 09:37:54 crc kubenswrapper[4822]: I0224 09:37:54.629163 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52188: no serving certificate available for the kubelet" Feb 24 09:37:55 crc kubenswrapper[4822]: I0224 09:37:55.337889 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:37:55 crc kubenswrapper[4822]: E0224 09:37:55.338629 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:37:57 crc kubenswrapper[4822]: I0224 09:37:57.179598 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52190: no 
serving certificate available for the kubelet" Feb 24 09:37:57 crc kubenswrapper[4822]: I0224 09:37:57.674225 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52200: no serving certificate available for the kubelet" Feb 24 09:38:00 crc kubenswrapper[4822]: I0224 09:38:00.237283 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52202: no serving certificate available for the kubelet" Feb 24 09:38:00 crc kubenswrapper[4822]: I0224 09:38:00.732583 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52212: no serving certificate available for the kubelet" Feb 24 09:38:03 crc kubenswrapper[4822]: I0224 09:38:03.294757 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60534: no serving certificate available for the kubelet" Feb 24 09:38:03 crc kubenswrapper[4822]: I0224 09:38:03.787834 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60538: no serving certificate available for the kubelet" Feb 24 09:38:06 crc kubenswrapper[4822]: I0224 09:38:06.351697 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60550: no serving certificate available for the kubelet" Feb 24 09:38:06 crc kubenswrapper[4822]: I0224 09:38:06.904423 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60552: no serving certificate available for the kubelet" Feb 24 09:38:09 crc kubenswrapper[4822]: I0224 09:38:09.410111 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60560: no serving certificate available for the kubelet" Feb 24 09:38:09 crc kubenswrapper[4822]: I0224 09:38:09.938339 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60574: no serving certificate available for the kubelet" Feb 24 09:38:10 crc kubenswrapper[4822]: I0224 09:38:10.339190 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:38:10 crc kubenswrapper[4822]: E0224 09:38:10.339613 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:38:12 crc kubenswrapper[4822]: I0224 09:38:12.458611 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58672: no serving certificate available for the kubelet" Feb 24 09:38:12 crc kubenswrapper[4822]: I0224 09:38:12.992310 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58680: no serving certificate available for the kubelet" Feb 24 09:38:15 crc kubenswrapper[4822]: I0224 09:38:15.516713 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58688: no serving certificate available for the kubelet" Feb 24 09:38:16 crc kubenswrapper[4822]: I0224 09:38:16.049939 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58704: no serving certificate available for the kubelet" Feb 24 09:38:18 crc kubenswrapper[4822]: I0224 09:38:18.571972 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58716: no serving certificate available for the kubelet" Feb 24 09:38:19 crc kubenswrapper[4822]: I0224 09:38:19.092215 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58732: no serving certificate available for the kubelet" Feb 24 09:38:21 crc kubenswrapper[4822]: I0224 09:38:21.624552 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36072: no serving certificate available for the kubelet" Feb 24 09:38:22 crc kubenswrapper[4822]: I0224 09:38:22.225386 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36082: no serving certificate available for the kubelet" Feb 24 09:38:23 crc kubenswrapper[4822]: I0224 09:38:23.337863 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:38:23 crc kubenswrapper[4822]: E0224 
09:38:23.339211 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:38:24 crc kubenswrapper[4822]: I0224 09:38:24.687595 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36094: no serving certificate available for the kubelet" Feb 24 09:38:25 crc kubenswrapper[4822]: I0224 09:38:25.283327 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36108: no serving certificate available for the kubelet" Feb 24 09:38:27 crc kubenswrapper[4822]: I0224 09:38:27.742732 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36122: no serving certificate available for the kubelet" Feb 24 09:38:28 crc kubenswrapper[4822]: I0224 09:38:28.343493 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36126: no serving certificate available for the kubelet" Feb 24 09:38:30 crc kubenswrapper[4822]: I0224 09:38:30.796085 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36140: no serving certificate available for the kubelet" Feb 24 09:38:31 crc kubenswrapper[4822]: I0224 09:38:31.404095 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35466: no serving certificate available for the kubelet" Feb 24 09:38:33 crc kubenswrapper[4822]: I0224 09:38:33.844871 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35468: no serving certificate available for the kubelet" Feb 24 09:38:34 crc kubenswrapper[4822]: I0224 09:38:34.458402 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35482: no serving certificate available for the kubelet" Feb 24 09:38:35 crc kubenswrapper[4822]: I0224 09:38:35.338081 4822 scope.go:117] "RemoveContainer" 
containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:38:35 crc kubenswrapper[4822]: E0224 09:38:35.338437 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:38:36 crc kubenswrapper[4822]: I0224 09:38:36.921035 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35494: no serving certificate available for the kubelet" Feb 24 09:38:37 crc kubenswrapper[4822]: I0224 09:38:37.567079 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35498: no serving certificate available for the kubelet" Feb 24 09:38:39 crc kubenswrapper[4822]: I0224 09:38:39.980398 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35508: no serving certificate available for the kubelet" Feb 24 09:38:40 crc kubenswrapper[4822]: I0224 09:38:40.608883 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35518: no serving certificate available for the kubelet" Feb 24 09:38:43 crc kubenswrapper[4822]: I0224 09:38:43.027752 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53252: no serving certificate available for the kubelet" Feb 24 09:38:43 crc kubenswrapper[4822]: I0224 09:38:43.668019 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53258: no serving certificate available for the kubelet" Feb 24 09:38:46 crc kubenswrapper[4822]: I0224 09:38:46.090071 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53262: no serving certificate available for the kubelet" Feb 24 09:38:46 crc kubenswrapper[4822]: I0224 09:38:46.724733 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53278: no serving certificate available for the 
kubelet" Feb 24 09:38:48 crc kubenswrapper[4822]: I0224 09:38:48.352670 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:38:48 crc kubenswrapper[4822]: E0224 09:38:48.353522 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:38:49 crc kubenswrapper[4822]: I0224 09:38:49.151549 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53280: no serving certificate available for the kubelet" Feb 24 09:38:49 crc kubenswrapper[4822]: I0224 09:38:49.781627 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53284: no serving certificate available for the kubelet" Feb 24 09:38:52 crc kubenswrapper[4822]: I0224 09:38:52.218787 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39444: no serving certificate available for the kubelet" Feb 24 09:38:52 crc kubenswrapper[4822]: I0224 09:38:52.844047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39448: no serving certificate available for the kubelet" Feb 24 09:38:55 crc kubenswrapper[4822]: I0224 09:38:55.279424 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39464: no serving certificate available for the kubelet" Feb 24 09:38:55 crc kubenswrapper[4822]: I0224 09:38:55.900909 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39480: no serving certificate available for the kubelet" Feb 24 09:38:58 crc kubenswrapper[4822]: I0224 09:38:58.339656 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39490: no serving certificate available for the kubelet" Feb 24 09:38:58 crc kubenswrapper[4822]: I0224 09:38:58.961411 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:39504: no serving certificate available for the kubelet" Feb 24 09:39:00 crc kubenswrapper[4822]: I0224 09:39:00.338333 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:39:00 crc kubenswrapper[4822]: E0224 09:39:00.338764 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:39:01 crc kubenswrapper[4822]: I0224 09:39:01.410655 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38890: no serving certificate available for the kubelet" Feb 24 09:39:02 crc kubenswrapper[4822]: I0224 09:39:02.016328 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38892: no serving certificate available for the kubelet" Feb 24 09:39:04 crc kubenswrapper[4822]: I0224 09:39:04.468479 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38894: no serving certificate available for the kubelet" Feb 24 09:39:05 crc kubenswrapper[4822]: I0224 09:39:05.126528 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38900: no serving certificate available for the kubelet" Feb 24 09:39:07 crc kubenswrapper[4822]: I0224 09:39:07.520558 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38906: no serving certificate available for the kubelet" Feb 24 09:39:08 crc kubenswrapper[4822]: I0224 09:39:08.172656 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38914: no serving certificate available for the kubelet" Feb 24 09:39:10 crc kubenswrapper[4822]: I0224 09:39:10.565601 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38930: no 
serving certificate available for the kubelet" Feb 24 09:39:11 crc kubenswrapper[4822]: I0224 09:39:11.231485 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47994: no serving certificate available for the kubelet" Feb 24 09:39:11 crc kubenswrapper[4822]: I0224 09:39:11.338244 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:39:11 crc kubenswrapper[4822]: E0224 09:39:11.338706 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:39:13 crc kubenswrapper[4822]: I0224 09:39:13.625628 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48002: no serving certificate available for the kubelet" Feb 24 09:39:14 crc kubenswrapper[4822]: I0224 09:39:14.288415 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48006: no serving certificate available for the kubelet" Feb 24 09:39:16 crc kubenswrapper[4822]: I0224 09:39:16.683471 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48010: no serving certificate available for the kubelet" Feb 24 09:39:17 crc kubenswrapper[4822]: I0224 09:39:17.347421 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48024: no serving certificate available for the kubelet" Feb 24 09:39:19 crc kubenswrapper[4822]: I0224 09:39:19.740603 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48032: no serving certificate available for the kubelet" Feb 24 09:39:20 crc kubenswrapper[4822]: I0224 09:39:20.403344 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48036: no serving certificate available for the kubelet" Feb 24 09:39:22 crc 
kubenswrapper[4822]: I0224 09:39:22.814607 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50890: no serving certificate available for the kubelet" Feb 24 09:39:23 crc kubenswrapper[4822]: I0224 09:39:23.462389 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50900: no serving certificate available for the kubelet" Feb 24 09:39:24 crc kubenswrapper[4822]: I0224 09:39:24.338902 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:39:24 crc kubenswrapper[4822]: E0224 09:39:24.339888 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:39:25 crc kubenswrapper[4822]: I0224 09:39:25.870364 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50932: no serving certificate available for the kubelet" Feb 24 09:39:26 crc kubenswrapper[4822]: I0224 09:39:26.510990 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50940: no serving certificate available for the kubelet" Feb 24 09:39:28 crc kubenswrapper[4822]: I0224 09:39:28.929526 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50954: no serving certificate available for the kubelet" Feb 24 09:39:29 crc kubenswrapper[4822]: I0224 09:39:29.569549 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50966: no serving certificate available for the kubelet" Feb 24 09:39:31 crc kubenswrapper[4822]: I0224 09:39:31.993089 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37684: no serving certificate available for the kubelet" Feb 24 09:39:32 crc kubenswrapper[4822]: I0224 09:39:32.637672 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:37700: no serving certificate available for the kubelet" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.532333 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.532973 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="extract-utilities" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533025 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="extract-utilities" Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.533052 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533067 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.533100 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="extract-content" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533116 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="extract-content" Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.533153 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="extract-utilities" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533172 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="extract-utilities" Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.533197 4822 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="extract-content" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533212 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="extract-content" Feb 24 09:39:33 crc kubenswrapper[4822]: E0224 09:39:33.533243 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533259 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533644 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="21acd4cb-8a1e-40ec-b59a-83a8dd00fd91" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.533676 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64cbb19-28f8-40e0-8a8f-c45a4a7cea64" containerName="registry-server" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.536218 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.558826 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.719348 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95s6\" (UniqueName: \"kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.719419 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.719499 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.820882 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.821044 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w95s6\" (UniqueName: \"kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.821072 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.821784 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.822122 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.841033 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w95s6\" (UniqueName: \"kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6\") pod \"redhat-operators-blkrw\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:33 crc kubenswrapper[4822]: I0224 09:39:33.876828 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:34 crc kubenswrapper[4822]: I0224 09:39:34.405519 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:34 crc kubenswrapper[4822]: I0224 09:39:34.474279 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerStarted","Data":"5f49f4a240a3b8e5927d8e1c174db6311d1682a3f78b54073a4fab47fec368d7"} Feb 24 09:39:35 crc kubenswrapper[4822]: I0224 09:39:35.033255 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37702: no serving certificate available for the kubelet" Feb 24 09:39:35 crc kubenswrapper[4822]: I0224 09:39:35.486064 4822 generic.go:334] "Generic (PLEG): container finished" podID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerID="5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1" exitCode=0 Feb 24 09:39:35 crc kubenswrapper[4822]: I0224 09:39:35.486155 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerDied","Data":"5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1"} Feb 24 09:39:35 crc kubenswrapper[4822]: I0224 09:39:35.732293 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37710: no serving certificate available for the kubelet" Feb 24 09:39:37 crc kubenswrapper[4822]: I0224 09:39:37.505299 4822 generic.go:334] "Generic (PLEG): container finished" podID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerID="f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9" exitCode=0 Feb 24 09:39:37 crc kubenswrapper[4822]: I0224 09:39:37.505359 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" 
event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerDied","Data":"f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9"} Feb 24 09:39:38 crc kubenswrapper[4822]: I0224 09:39:38.090475 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37724: no serving certificate available for the kubelet" Feb 24 09:39:38 crc kubenswrapper[4822]: I0224 09:39:38.521187 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerStarted","Data":"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29"} Feb 24 09:39:38 crc kubenswrapper[4822]: I0224 09:39:38.557725 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blkrw" podStartSLOduration=2.925376857 podStartE2EDuration="5.557697813s" podCreationTimestamp="2026-02-24 09:39:33 +0000 UTC" firstStartedPulling="2026-02-24 09:39:35.488585551 +0000 UTC m=+1897.876348139" lastFinishedPulling="2026-02-24 09:39:38.120906507 +0000 UTC m=+1900.508669095" observedRunningTime="2026-02-24 09:39:38.55121421 +0000 UTC m=+1900.938976848" watchObservedRunningTime="2026-02-24 09:39:38.557697813 +0000 UTC m=+1900.945460401" Feb 24 09:39:38 crc kubenswrapper[4822]: I0224 09:39:38.791246 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37734: no serving certificate available for the kubelet" Feb 24 09:39:39 crc kubenswrapper[4822]: I0224 09:39:39.337711 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:39:39 crc kubenswrapper[4822]: E0224 09:39:39.338139 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:39:41 crc kubenswrapper[4822]: I0224 09:39:41.154713 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38830: no serving certificate available for the kubelet" Feb 24 09:39:41 crc kubenswrapper[4822]: I0224 09:39:41.852797 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38840: no serving certificate available for the kubelet" Feb 24 09:39:43 crc kubenswrapper[4822]: I0224 09:39:43.877733 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:43 crc kubenswrapper[4822]: I0224 09:39:43.878753 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:44 crc kubenswrapper[4822]: I0224 09:39:44.201750 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38856: no serving certificate available for the kubelet" Feb 24 09:39:44 crc kubenswrapper[4822]: I0224 09:39:44.894013 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38872: no serving certificate available for the kubelet" Feb 24 09:39:44 crc kubenswrapper[4822]: I0224 09:39:44.955837 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-blkrw" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="registry-server" probeResult="failure" output=< Feb 24 09:39:44 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:39:44 crc kubenswrapper[4822]: > Feb 24 09:39:47 crc kubenswrapper[4822]: I0224 09:39:47.255086 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38874: no serving certificate available for the kubelet" Feb 24 09:39:47 crc kubenswrapper[4822]: I0224 09:39:47.942893 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:38876: no serving certificate available for the kubelet" Feb 24 09:39:50 crc kubenswrapper[4822]: I0224 09:39:50.335989 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38880: no serving certificate available for the kubelet" Feb 24 09:39:50 crc kubenswrapper[4822]: I0224 09:39:50.997232 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38896: no serving certificate available for the kubelet" Feb 24 09:39:53 crc kubenswrapper[4822]: I0224 09:39:53.337075 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:39:53 crc kubenswrapper[4822]: E0224 09:39:53.337582 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:39:53 crc kubenswrapper[4822]: I0224 09:39:53.385734 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59770: no serving certificate available for the kubelet" Feb 24 09:39:53 crc kubenswrapper[4822]: I0224 09:39:53.967822 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:54 crc kubenswrapper[4822]: I0224 09:39:54.027951 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:54 crc kubenswrapper[4822]: I0224 09:39:54.043227 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59774: no serving certificate available for the kubelet" Feb 24 09:39:54 crc kubenswrapper[4822]: I0224 09:39:54.206370 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:55 crc kubenswrapper[4822]: I0224 09:39:55.732297 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blkrw" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="registry-server" containerID="cri-o://17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29" gracePeriod=2 Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.437338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59780: no serving certificate available for the kubelet" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.740278 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.764739 4822 generic.go:334] "Generic (PLEG): container finished" podID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerID="17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29" exitCode=0 Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.764804 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerDied","Data":"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29"} Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.764843 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blkrw" event={"ID":"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab","Type":"ContainerDied","Data":"5f49f4a240a3b8e5927d8e1c174db6311d1682a3f78b54073a4fab47fec368d7"} Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.764870 4822 scope.go:117] "RemoveContainer" containerID="17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.765072 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blkrw" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.800755 4822 scope.go:117] "RemoveContainer" containerID="f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.823140 4822 scope.go:117] "RemoveContainer" containerID="5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.852098 4822 scope.go:117] "RemoveContainer" containerID="17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29" Feb 24 09:39:56 crc kubenswrapper[4822]: E0224 09:39:56.852525 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29\": container with ID starting with 17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29 not found: ID does not exist" containerID="17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.852556 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29"} err="failed to get container status \"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29\": rpc error: code = NotFound desc = could not find container \"17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29\": container with ID starting with 17e0728286e367ad3d9a8fc572fb87ed1815832e207b70598be9d81bfc709a29 not found: ID does not exist" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.852576 4822 scope.go:117] "RemoveContainer" containerID="f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9" Feb 24 09:39:56 crc kubenswrapper[4822]: E0224 09:39:56.852970 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9\": container with ID starting with f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9 not found: ID does not exist" containerID="f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.852996 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9"} err="failed to get container status \"f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9\": rpc error: code = NotFound desc = could not find container \"f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9\": container with ID starting with f0455a7eca7b8a5778cdce31b551cb605fbefc4f1855f08b096ef3024b618ab9 not found: ID does not exist" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.853016 4822 scope.go:117] "RemoveContainer" containerID="5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1" Feb 24 09:39:56 crc kubenswrapper[4822]: E0224 09:39:56.853238 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1\": container with ID starting with 5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1 not found: ID does not exist" containerID="5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.853259 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1"} err="failed to get container status \"5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1\": rpc error: code = NotFound desc = could not find container 
\"5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1\": container with ID starting with 5d620fdd176811a3f1a2a292171857106529c68abb5e4830d1ba6263477f91c1 not found: ID does not exist" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.879951 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities\") pod \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.879994 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content\") pod \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.880050 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w95s6\" (UniqueName: \"kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6\") pod \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\" (UID: \"7e069c1e-4e3c-49dd-a88a-f87325c8f4ab\") " Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.881239 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities" (OuterVolumeSpecName: "utilities") pod "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" (UID: "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.887344 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6" (OuterVolumeSpecName: "kube-api-access-w95s6") pod "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" (UID: "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab"). InnerVolumeSpecName "kube-api-access-w95s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.981673 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:39:56 crc kubenswrapper[4822]: I0224 09:39:56.981712 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w95s6\" (UniqueName: \"kubernetes.io/projected/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-kube-api-access-w95s6\") on node \"crc\" DevicePath \"\"" Feb 24 09:39:57 crc kubenswrapper[4822]: I0224 09:39:57.042518 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" (UID: "7e069c1e-4e3c-49dd-a88a-f87325c8f4ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:39:57 crc kubenswrapper[4822]: I0224 09:39:57.081480 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59796: no serving certificate available for the kubelet" Feb 24 09:39:57 crc kubenswrapper[4822]: I0224 09:39:57.083276 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:39:57 crc kubenswrapper[4822]: I0224 09:39:57.103354 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:57 crc kubenswrapper[4822]: I0224 09:39:57.111302 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blkrw"] Feb 24 09:39:58 crc kubenswrapper[4822]: I0224 09:39:58.356371 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" path="/var/lib/kubelet/pods/7e069c1e-4e3c-49dd-a88a-f87325c8f4ab/volumes" Feb 24 09:39:59 crc kubenswrapper[4822]: I0224 09:39:59.478696 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59800: no serving certificate available for the kubelet" Feb 24 09:40:00 crc kubenswrapper[4822]: I0224 09:40:00.141615 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59814: no serving certificate available for the kubelet" Feb 24 09:40:02 crc kubenswrapper[4822]: I0224 09:40:02.543994 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47430: no serving certificate available for the kubelet" Feb 24 09:40:03 crc kubenswrapper[4822]: I0224 09:40:03.201164 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47432: no serving certificate available for the kubelet" Feb 24 09:40:05 crc kubenswrapper[4822]: I0224 09:40:05.598313 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47438: no serving certificate available for the kubelet" Feb 24 
09:40:06 crc kubenswrapper[4822]: I0224 09:40:06.264571 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47446: no serving certificate available for the kubelet" Feb 24 09:40:07 crc kubenswrapper[4822]: I0224 09:40:07.337619 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:40:07 crc kubenswrapper[4822]: E0224 09:40:07.338190 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:40:08 crc kubenswrapper[4822]: I0224 09:40:08.662474 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47452: no serving certificate available for the kubelet" Feb 24 09:40:09 crc kubenswrapper[4822]: I0224 09:40:09.320089 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47468: no serving certificate available for the kubelet" Feb 24 09:40:11 crc kubenswrapper[4822]: I0224 09:40:11.718187 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59930: no serving certificate available for the kubelet" Feb 24 09:40:12 crc kubenswrapper[4822]: I0224 09:40:12.381581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59942: no serving certificate available for the kubelet" Feb 24 09:40:14 crc kubenswrapper[4822]: I0224 09:40:14.771146 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59946: no serving certificate available for the kubelet" Feb 24 09:40:15 crc kubenswrapper[4822]: I0224 09:40:15.439432 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59958: no serving certificate available for the kubelet" Feb 24 09:40:17 crc kubenswrapper[4822]: I0224 09:40:17.834972 4822 ???:1] "http: 
TLS handshake error from 192.168.126.11:59964: no serving certificate available for the kubelet" Feb 24 09:40:18 crc kubenswrapper[4822]: I0224 09:40:18.499090 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59972: no serving certificate available for the kubelet" Feb 24 09:40:20 crc kubenswrapper[4822]: I0224 09:40:20.337972 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:40:20 crc kubenswrapper[4822]: I0224 09:40:20.889472 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59982: no serving certificate available for the kubelet" Feb 24 09:40:21 crc kubenswrapper[4822]: I0224 09:40:21.002446 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce"} Feb 24 09:40:21 crc kubenswrapper[4822]: I0224 09:40:21.546541 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56050: no serving certificate available for the kubelet" Feb 24 09:40:23 crc kubenswrapper[4822]: I0224 09:40:23.967698 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56060: no serving certificate available for the kubelet" Feb 24 09:40:24 crc kubenswrapper[4822]: I0224 09:40:24.606488 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56076: no serving certificate available for the kubelet" Feb 24 09:40:27 crc kubenswrapper[4822]: I0224 09:40:27.025461 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56092: no serving certificate available for the kubelet" Feb 24 09:40:27 crc kubenswrapper[4822]: I0224 09:40:27.666000 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56104: no serving certificate available for the kubelet" Feb 24 09:40:30 crc kubenswrapper[4822]: I0224 09:40:30.080338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56112: no 
serving certificate available for the kubelet" Feb 24 09:40:30 crc kubenswrapper[4822]: I0224 09:40:30.721903 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56126: no serving certificate available for the kubelet" Feb 24 09:40:33 crc kubenswrapper[4822]: I0224 09:40:33.139722 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47428: no serving certificate available for the kubelet" Feb 24 09:40:33 crc kubenswrapper[4822]: I0224 09:40:33.790332 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47440: no serving certificate available for the kubelet" Feb 24 09:40:36 crc kubenswrapper[4822]: I0224 09:40:36.209324 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47444: no serving certificate available for the kubelet" Feb 24 09:40:36 crc kubenswrapper[4822]: I0224 09:40:36.854908 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47456: no serving certificate available for the kubelet" Feb 24 09:40:39 crc kubenswrapper[4822]: I0224 09:40:39.262829 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47466: no serving certificate available for the kubelet" Feb 24 09:40:39 crc kubenswrapper[4822]: I0224 09:40:39.913712 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47474: no serving certificate available for the kubelet" Feb 24 09:40:42 crc kubenswrapper[4822]: I0224 09:40:42.328577 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44560: no serving certificate available for the kubelet" Feb 24 09:40:42 crc kubenswrapper[4822]: I0224 09:40:42.964639 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44564: no serving certificate available for the kubelet" Feb 24 09:40:44 crc kubenswrapper[4822]: E0224 09:40:44.524354 4822 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 09:40:45 crc kubenswrapper[4822]: I0224 09:40:45.389330 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:44572: no serving certificate available for the kubelet" Feb 24 09:40:46 crc kubenswrapper[4822]: I0224 09:40:46.015801 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44586: no serving certificate available for the kubelet" Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.497545 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44602: no serving certificate available for the kubelet" Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.773432 4822 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.785434 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.813145 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44618: no serving certificate available for the kubelet" Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.855207 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44628: no serving certificate available for the kubelet" Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.901797 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44642: no serving certificate available for the kubelet" Feb 24 09:40:48 crc kubenswrapper[4822]: I0224 09:40:48.965508 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44648: no serving certificate available for the kubelet" Feb 24 09:40:49 crc kubenswrapper[4822]: I0224 09:40:49.044689 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44650: no serving certificate available for the kubelet" Feb 24 09:40:49 crc kubenswrapper[4822]: I0224 09:40:49.069715 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44666: no serving certificate available for the kubelet" Feb 24 09:40:49 crc kubenswrapper[4822]: I0224 09:40:49.158389 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44678: no 
serving certificate available for the kubelet" Feb 24 09:40:49 crc kubenswrapper[4822]: I0224 09:40:49.349187 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44680: no serving certificate available for the kubelet" Feb 24 09:40:49 crc kubenswrapper[4822]: I0224 09:40:49.701826 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44694: no serving certificate available for the kubelet" Feb 24 09:40:50 crc kubenswrapper[4822]: I0224 09:40:50.381323 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44702: no serving certificate available for the kubelet" Feb 24 09:40:51 crc kubenswrapper[4822]: I0224 09:40:51.556790 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45326: no serving certificate available for the kubelet" Feb 24 09:40:51 crc kubenswrapper[4822]: I0224 09:40:51.696307 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45338: no serving certificate available for the kubelet" Feb 24 09:40:52 crc kubenswrapper[4822]: I0224 09:40:52.127048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45348: no serving certificate available for the kubelet" Feb 24 09:40:54 crc kubenswrapper[4822]: I0224 09:40:54.290164 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45358: no serving certificate available for the kubelet" Feb 24 09:40:54 crc kubenswrapper[4822]: I0224 09:40:54.615904 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45368: no serving certificate available for the kubelet" Feb 24 09:40:55 crc kubenswrapper[4822]: I0224 09:40:55.184516 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45384: no serving certificate available for the kubelet" Feb 24 09:40:57 crc kubenswrapper[4822]: I0224 09:40:57.668416 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45386: no serving certificate available for the kubelet" Feb 24 09:40:58 crc kubenswrapper[4822]: I0224 09:40:58.236351 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45400: no serving certificate available 
for the kubelet" Feb 24 09:40:59 crc kubenswrapper[4822]: I0224 09:40:59.444422 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45410: no serving certificate available for the kubelet" Feb 24 09:41:00 crc kubenswrapper[4822]: I0224 09:41:00.721793 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45412: no serving certificate available for the kubelet" Feb 24 09:41:01 crc kubenswrapper[4822]: I0224 09:41:01.290719 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43194: no serving certificate available for the kubelet" Feb 24 09:41:03 crc kubenswrapper[4822]: I0224 09:41:03.830981 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43204: no serving certificate available for the kubelet" Feb 24 09:41:04 crc kubenswrapper[4822]: I0224 09:41:04.355969 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43212: no serving certificate available for the kubelet" Feb 24 09:41:06 crc kubenswrapper[4822]: I0224 09:41:06.883860 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43220: no serving certificate available for the kubelet" Feb 24 09:41:07 crc kubenswrapper[4822]: I0224 09:41:07.405250 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43236: no serving certificate available for the kubelet" Feb 24 09:41:09 crc kubenswrapper[4822]: I0224 09:41:09.737430 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43244: no serving certificate available for the kubelet" Feb 24 09:41:09 crc kubenswrapper[4822]: I0224 09:41:09.940738 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43256: no serving certificate available for the kubelet" Feb 24 09:41:10 crc kubenswrapper[4822]: I0224 09:41:10.456091 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43258: no serving certificate available for the kubelet" Feb 24 09:41:12 crc kubenswrapper[4822]: I0224 09:41:12.995057 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57286: no serving certificate available for the kubelet" Feb 24 
09:41:13 crc kubenswrapper[4822]: I0224 09:41:13.499303 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57288: no serving certificate available for the kubelet" Feb 24 09:41:16 crc kubenswrapper[4822]: I0224 09:41:16.054544 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57292: no serving certificate available for the kubelet" Feb 24 09:41:16 crc kubenswrapper[4822]: I0224 09:41:16.571288 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57308: no serving certificate available for the kubelet" Feb 24 09:41:19 crc kubenswrapper[4822]: I0224 09:41:19.115826 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57312: no serving certificate available for the kubelet" Feb 24 09:41:19 crc kubenswrapper[4822]: I0224 09:41:19.630657 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57320: no serving certificate available for the kubelet" Feb 24 09:41:22 crc kubenswrapper[4822]: I0224 09:41:22.166502 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33910: no serving certificate available for the kubelet" Feb 24 09:41:22 crc kubenswrapper[4822]: I0224 09:41:22.687906 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33916: no serving certificate available for the kubelet" Feb 24 09:41:25 crc kubenswrapper[4822]: I0224 09:41:25.229549 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33918: no serving certificate available for the kubelet" Feb 24 09:41:25 crc kubenswrapper[4822]: I0224 09:41:25.752458 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33920: no serving certificate available for the kubelet" Feb 24 09:41:28 crc kubenswrapper[4822]: I0224 09:41:28.286275 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33932: no serving certificate available for the kubelet" Feb 24 09:41:28 crc kubenswrapper[4822]: I0224 09:41:28.814064 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33938: no serving certificate available for the kubelet" Feb 24 09:41:30 crc 
kubenswrapper[4822]: I0224 09:41:30.257079 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33952: no serving certificate available for the kubelet" Feb 24 09:41:31 crc kubenswrapper[4822]: I0224 09:41:31.395715 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45206: no serving certificate available for the kubelet" Feb 24 09:41:31 crc kubenswrapper[4822]: I0224 09:41:31.850591 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45210: no serving certificate available for the kubelet" Feb 24 09:41:34 crc kubenswrapper[4822]: I0224 09:41:34.453547 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45212: no serving certificate available for the kubelet" Feb 24 09:41:34 crc kubenswrapper[4822]: I0224 09:41:34.635018 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:41:34 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:41:34 crc kubenswrapper[4822]: > Feb 24 09:41:34 crc kubenswrapper[4822]: I0224 09:41:34.635144 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:41:34 crc kubenswrapper[4822]: I0224 09:41:34.636173 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:41:34 crc kubenswrapper[4822]: I0224 09:41:34.711841 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f" gracePeriod=30 Feb 24 09:41:34 crc 
kubenswrapper[4822]: I0224 09:41:34.906409 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45224: no serving certificate available for the kubelet" Feb 24 09:41:35 crc kubenswrapper[4822]: I0224 09:41:35.766743 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f" exitCode=143 Feb 24 09:41:35 crc kubenswrapper[4822]: I0224 09:41:35.766797 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f"} Feb 24 09:41:35 crc kubenswrapper[4822]: I0224 09:41:35.766834 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8"} Feb 24 09:41:35 crc kubenswrapper[4822]: I0224 09:41:35.766865 4822 scope.go:117] "RemoveContainer" containerID="4c80c4166b319cb09d4c17d95095bc3b8c3e2970e502448adc74b74c1acaeabd" Feb 24 09:41:37 crc kubenswrapper[4822]: I0224 09:41:37.501047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45238: no serving certificate available for the kubelet" Feb 24 09:41:37 crc kubenswrapper[4822]: I0224 09:41:37.965999 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45254: no serving certificate available for the kubelet" Feb 24 09:41:40 crc kubenswrapper[4822]: I0224 09:41:40.550859 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45268: no serving certificate available for the kubelet" Feb 24 09:41:41 crc kubenswrapper[4822]: I0224 09:41:41.023532 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45270: no serving certificate available for the kubelet" Feb 24 09:41:41 crc kubenswrapper[4822]: I0224 09:41:41.152322 4822 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:41:41 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:41:41 crc kubenswrapper[4822]: > Feb 24 09:41:41 crc kubenswrapper[4822]: I0224 09:41:41.152471 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:41:41 crc kubenswrapper[4822]: I0224 09:41:41.153714 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:41:41 crc kubenswrapper[4822]: I0224 09:41:41.241811 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f" gracePeriod=30 Feb 24 09:41:42 crc kubenswrapper[4822]: I0224 09:41:42.081761 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f" exitCode=143 Feb 24 09:41:42 crc kubenswrapper[4822]: I0224 09:41:42.081849 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f"} Feb 24 09:41:42 crc kubenswrapper[4822]: I0224 09:41:42.082127 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01"} Feb 24 09:41:42 crc kubenswrapper[4822]: I0224 09:41:42.082152 4822 scope.go:117] "RemoveContainer" containerID="0a8786cf20731e3944c79e68631df8d4a02d00586465dfbca243dbb355db0059" Feb 24 09:41:43 crc kubenswrapper[4822]: I0224 09:41:43.057402 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:41:43 crc kubenswrapper[4822]: I0224 09:41:43.057470 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:41:43 crc kubenswrapper[4822]: I0224 09:41:43.602209 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40272: no serving certificate available for the kubelet" Feb 24 09:41:44 crc kubenswrapper[4822]: I0224 09:41:44.082037 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40274: no serving certificate available for the kubelet" Feb 24 09:41:46 crc kubenswrapper[4822]: I0224 09:41:46.667347 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40286: no serving certificate available for the kubelet" Feb 24 09:41:47 crc kubenswrapper[4822]: I0224 09:41:47.144212 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40298: no serving certificate available for the kubelet" Feb 24 09:41:49 crc kubenswrapper[4822]: I0224 09:41:49.727498 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40314: no serving certificate available for the kubelet" Feb 24 09:41:50 crc kubenswrapper[4822]: I0224 09:41:50.202566 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40320: no serving certificate available for the kubelet" Feb 24 09:41:51 crc kubenswrapper[4822]: I0224 09:41:51.616432 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:41:51 crc kubenswrapper[4822]: I0224 09:41:51.616907 4822 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:41:52 crc kubenswrapper[4822]: I0224 09:41:52.796020 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32912: no serving certificate available for the kubelet" Feb 24 09:41:53 crc kubenswrapper[4822]: I0224 09:41:53.262078 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32922: no serving certificate available for the kubelet" Feb 24 09:41:55 crc kubenswrapper[4822]: I0224 09:41:55.853781 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32924: no serving certificate available for the kubelet" Feb 24 09:41:56 crc kubenswrapper[4822]: I0224 09:41:56.307591 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32930: no serving certificate available for the kubelet" Feb 24 09:41:58 crc kubenswrapper[4822]: I0224 09:41:58.921594 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32932: no serving certificate available for the kubelet" Feb 24 09:41:59 crc kubenswrapper[4822]: I0224 09:41:59.369731 4822 ???:1] "http: TLS handshake error from 192.168.126.11:32942: no serving certificate available for the kubelet" Feb 24 09:42:01 crc kubenswrapper[4822]: I0224 09:42:01.980002 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54452: no serving certificate available for the kubelet" Feb 24 09:42:02 crc kubenswrapper[4822]: I0224 09:42:02.427346 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54454: no serving certificate available for the kubelet" Feb 24 09:42:05 crc kubenswrapper[4822]: I0224 09:42:05.038869 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54466: no serving certificate available for the kubelet" Feb 24 09:42:05 crc kubenswrapper[4822]: I0224 09:42:05.493608 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54482: no serving certificate available for the kubelet" Feb 24 09:42:08 crc kubenswrapper[4822]: I0224 09:42:08.090672 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:54498: no serving certificate available for the kubelet" Feb 24 09:42:08 crc kubenswrapper[4822]: I0224 09:42:08.552510 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54504: no serving certificate available for the kubelet" Feb 24 09:42:11 crc kubenswrapper[4822]: I0224 09:42:11.149512 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39210: no serving certificate available for the kubelet" Feb 24 09:42:11 crc kubenswrapper[4822]: I0224 09:42:11.259636 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39212: no serving certificate available for the kubelet" Feb 24 09:42:11 crc kubenswrapper[4822]: I0224 09:42:11.620048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39222: no serving certificate available for the kubelet" Feb 24 09:42:14 crc kubenswrapper[4822]: I0224 09:42:14.207073 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39230: no serving certificate available for the kubelet" Feb 24 09:42:14 crc kubenswrapper[4822]: I0224 09:42:14.685136 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39244: no serving certificate available for the kubelet" Feb 24 09:42:17 crc kubenswrapper[4822]: I0224 09:42:17.242505 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39252: no serving certificate available for the kubelet" Feb 24 09:42:17 crc kubenswrapper[4822]: I0224 09:42:17.732707 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39268: no serving certificate available for the kubelet" Feb 24 09:42:20 crc kubenswrapper[4822]: I0224 09:42:20.296731 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39278: no serving certificate available for the kubelet" Feb 24 09:42:20 crc kubenswrapper[4822]: I0224 09:42:20.790604 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39282: no serving certificate available for the kubelet" Feb 24 09:42:23 crc kubenswrapper[4822]: I0224 09:42:23.360101 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47638: no 
serving certificate available for the kubelet" Feb 24 09:42:23 crc kubenswrapper[4822]: I0224 09:42:23.846367 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47650: no serving certificate available for the kubelet" Feb 24 09:42:26 crc kubenswrapper[4822]: I0224 09:42:26.408941 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47658: no serving certificate available for the kubelet" Feb 24 09:42:26 crc kubenswrapper[4822]: I0224 09:42:26.894562 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47666: no serving certificate available for the kubelet" Feb 24 09:42:29 crc kubenswrapper[4822]: I0224 09:42:29.469994 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47682: no serving certificate available for the kubelet" Feb 24 09:42:29 crc kubenswrapper[4822]: I0224 09:42:29.956473 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47688: no serving certificate available for the kubelet" Feb 24 09:42:32 crc kubenswrapper[4822]: I0224 09:42:32.523262 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40450: no serving certificate available for the kubelet" Feb 24 09:42:33 crc kubenswrapper[4822]: I0224 09:42:33.010689 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40460: no serving certificate available for the kubelet" Feb 24 09:42:35 crc kubenswrapper[4822]: I0224 09:42:35.578581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40476: no serving certificate available for the kubelet" Feb 24 09:42:36 crc kubenswrapper[4822]: I0224 09:42:36.063216 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40488: no serving certificate available for the kubelet" Feb 24 09:42:38 crc kubenswrapper[4822]: I0224 09:42:38.941474 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40494: no serving certificate available for the kubelet" Feb 24 09:42:39 crc kubenswrapper[4822]: I0224 09:42:39.110153 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40508: no serving certificate available 
for the kubelet" Feb 24 09:42:41 crc kubenswrapper[4822]: I0224 09:42:41.991492 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44410: no serving certificate available for the kubelet" Feb 24 09:42:42 crc kubenswrapper[4822]: I0224 09:42:42.173051 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44426: no serving certificate available for the kubelet" Feb 24 09:42:45 crc kubenswrapper[4822]: I0224 09:42:45.037171 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44442: no serving certificate available for the kubelet" Feb 24 09:42:45 crc kubenswrapper[4822]: I0224 09:42:45.227609 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44452: no serving certificate available for the kubelet" Feb 24 09:42:45 crc kubenswrapper[4822]: I0224 09:42:45.676801 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:42:45 crc kubenswrapper[4822]: I0224 09:42:45.676871 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:42:48 crc kubenswrapper[4822]: I0224 09:42:48.100359 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44468: no serving certificate available for the kubelet" Feb 24 09:42:48 crc kubenswrapper[4822]: I0224 09:42:48.286556 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44472: no serving certificate available for the kubelet" Feb 24 09:42:51 crc kubenswrapper[4822]: I0224 09:42:51.174527 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44558: no serving certificate available 
for the kubelet" Feb 24 09:42:51 crc kubenswrapper[4822]: I0224 09:42:51.319696 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44560: no serving certificate available for the kubelet" Feb 24 09:42:54 crc kubenswrapper[4822]: I0224 09:42:54.234507 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44576: no serving certificate available for the kubelet" Feb 24 09:42:54 crc kubenswrapper[4822]: I0224 09:42:54.363448 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44590: no serving certificate available for the kubelet" Feb 24 09:42:57 crc kubenswrapper[4822]: I0224 09:42:57.291862 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44606: no serving certificate available for the kubelet" Feb 24 09:42:57 crc kubenswrapper[4822]: I0224 09:42:57.420960 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44608: no serving certificate available for the kubelet" Feb 24 09:43:00 crc kubenswrapper[4822]: I0224 09:43:00.344764 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44624: no serving certificate available for the kubelet" Feb 24 09:43:00 crc kubenswrapper[4822]: I0224 09:43:00.476439 4822 ???:1] "http: TLS handshake error from 192.168.126.11:44640: no serving certificate available for the kubelet" Feb 24 09:43:03 crc kubenswrapper[4822]: I0224 09:43:03.391376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35316: no serving certificate available for the kubelet" Feb 24 09:43:03 crc kubenswrapper[4822]: I0224 09:43:03.530731 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35320: no serving certificate available for the kubelet" Feb 24 09:43:06 crc kubenswrapper[4822]: I0224 09:43:06.449083 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35332: no serving certificate available for the kubelet" Feb 24 09:43:06 crc kubenswrapper[4822]: I0224 09:43:06.597866 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35346: no serving certificate available for the kubelet" Feb 24 
09:43:09 crc kubenswrapper[4822]: I0224 09:43:09.500511 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35360: no serving certificate available for the kubelet" Feb 24 09:43:09 crc kubenswrapper[4822]: I0224 09:43:09.643706 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35376: no serving certificate available for the kubelet" Feb 24 09:43:12 crc kubenswrapper[4822]: I0224 09:43:12.562460 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35750: no serving certificate available for the kubelet" Feb 24 09:43:12 crc kubenswrapper[4822]: I0224 09:43:12.690001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35752: no serving certificate available for the kubelet" Feb 24 09:43:15 crc kubenswrapper[4822]: I0224 09:43:15.622652 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35764: no serving certificate available for the kubelet" Feb 24 09:43:15 crc kubenswrapper[4822]: I0224 09:43:15.676827 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:43:15 crc kubenswrapper[4822]: I0224 09:43:15.677248 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:43:15 crc kubenswrapper[4822]: I0224 09:43:15.748369 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35772: no serving certificate available for the kubelet" Feb 24 09:43:18 crc kubenswrapper[4822]: I0224 09:43:18.683097 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35774: no serving certificate available for the kubelet" Feb 24 
09:43:18 crc kubenswrapper[4822]: I0224 09:43:18.804863 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35790: no serving certificate available for the kubelet" Feb 24 09:43:21 crc kubenswrapper[4822]: I0224 09:43:21.742149 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48346: no serving certificate available for the kubelet" Feb 24 09:43:21 crc kubenswrapper[4822]: I0224 09:43:21.911010 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48350: no serving certificate available for the kubelet" Feb 24 09:43:24 crc kubenswrapper[4822]: I0224 09:43:24.801444 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48358: no serving certificate available for the kubelet" Feb 24 09:43:24 crc kubenswrapper[4822]: I0224 09:43:24.970132 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48364: no serving certificate available for the kubelet" Feb 24 09:43:27 crc kubenswrapper[4822]: I0224 09:43:27.850050 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48376: no serving certificate available for the kubelet" Feb 24 09:43:28 crc kubenswrapper[4822]: I0224 09:43:28.027043 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48386: no serving certificate available for the kubelet" Feb 24 09:43:30 crc kubenswrapper[4822]: I0224 09:43:30.911624 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48398: no serving certificate available for the kubelet" Feb 24 09:43:31 crc kubenswrapper[4822]: I0224 09:43:31.090028 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59634: no serving certificate available for the kubelet" Feb 24 09:43:33 crc kubenswrapper[4822]: I0224 09:43:33.221533 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59642: no serving certificate available for the kubelet" Feb 24 09:43:33 crc kubenswrapper[4822]: I0224 09:43:33.977397 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59654: no serving certificate available for the kubelet" Feb 24 09:43:34 crc 
kubenswrapper[4822]: I0224 09:43:34.142792 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59656: no serving certificate available for the kubelet" Feb 24 09:43:37 crc kubenswrapper[4822]: I0224 09:43:37.026010 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59666: no serving certificate available for the kubelet" Feb 24 09:43:37 crc kubenswrapper[4822]: I0224 09:43:37.196852 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59678: no serving certificate available for the kubelet" Feb 24 09:43:40 crc kubenswrapper[4822]: I0224 09:43:40.072163 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59694: no serving certificate available for the kubelet" Feb 24 09:43:40 crc kubenswrapper[4822]: I0224 09:43:40.247793 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59700: no serving certificate available for the kubelet" Feb 24 09:43:43 crc kubenswrapper[4822]: I0224 09:43:43.128550 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36834: no serving certificate available for the kubelet" Feb 24 09:43:43 crc kubenswrapper[4822]: I0224 09:43:43.304019 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36848: no serving certificate available for the kubelet" Feb 24 09:43:45 crc kubenswrapper[4822]: I0224 09:43:45.677025 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:43:45 crc kubenswrapper[4822]: I0224 09:43:45.677388 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:43:45 crc 
kubenswrapper[4822]: I0224 09:43:45.677446 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:43:45 crc kubenswrapper[4822]: I0224 09:43:45.678321 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:43:45 crc kubenswrapper[4822]: I0224 09:43:45.678413 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce" gracePeriod=600 Feb 24 09:43:46 crc kubenswrapper[4822]: I0224 09:43:46.176001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36856: no serving certificate available for the kubelet" Feb 24 09:43:46 crc kubenswrapper[4822]: I0224 09:43:46.354724 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36870: no serving certificate available for the kubelet" Feb 24 09:43:46 crc kubenswrapper[4822]: I0224 09:43:46.567712 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce" exitCode=0 Feb 24 09:43:46 crc kubenswrapper[4822]: I0224 09:43:46.567775 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce"} Feb 24 09:43:46 crc 
kubenswrapper[4822]: I0224 09:43:46.567815 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d"} Feb 24 09:43:46 crc kubenswrapper[4822]: I0224 09:43:46.567840 4822 scope.go:117] "RemoveContainer" containerID="0a783a49ee9100320d8d40e3dd712347665993b0a5bef661196e07267b06a180" Feb 24 09:43:49 crc kubenswrapper[4822]: I0224 09:43:49.227644 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36880: no serving certificate available for the kubelet" Feb 24 09:43:49 crc kubenswrapper[4822]: I0224 09:43:49.410549 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36892: no serving certificate available for the kubelet" Feb 24 09:43:52 crc kubenswrapper[4822]: I0224 09:43:52.294018 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56714: no serving certificate available for the kubelet" Feb 24 09:43:52 crc kubenswrapper[4822]: I0224 09:43:52.465700 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56726: no serving certificate available for the kubelet" Feb 24 09:43:55 crc kubenswrapper[4822]: I0224 09:43:55.340552 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56728: no serving certificate available for the kubelet" Feb 24 09:43:55 crc kubenswrapper[4822]: I0224 09:43:55.502232 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56734: no serving certificate available for the kubelet" Feb 24 09:43:58 crc kubenswrapper[4822]: I0224 09:43:58.395728 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56750: no serving certificate available for the kubelet" Feb 24 09:43:58 crc kubenswrapper[4822]: I0224 09:43:58.556424 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56758: no serving certificate available for the kubelet" Feb 24 09:44:01 crc kubenswrapper[4822]: I0224 09:44:01.443858 4822 
???:1] "http: TLS handshake error from 192.168.126.11:36622: no serving certificate available for the kubelet" Feb 24 09:44:01 crc kubenswrapper[4822]: I0224 09:44:01.595563 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36634: no serving certificate available for the kubelet" Feb 24 09:44:04 crc kubenswrapper[4822]: I0224 09:44:04.492112 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36646: no serving certificate available for the kubelet" Feb 24 09:44:04 crc kubenswrapper[4822]: I0224 09:44:04.637390 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36660: no serving certificate available for the kubelet" Feb 24 09:44:07 crc kubenswrapper[4822]: I0224 09:44:07.553438 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36664: no serving certificate available for the kubelet" Feb 24 09:44:07 crc kubenswrapper[4822]: I0224 09:44:07.681040 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36670: no serving certificate available for the kubelet" Feb 24 09:44:10 crc kubenswrapper[4822]: I0224 09:44:10.600050 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36674: no serving certificate available for the kubelet" Feb 24 09:44:10 crc kubenswrapper[4822]: I0224 09:44:10.723948 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36688: no serving certificate available for the kubelet" Feb 24 09:44:13 crc kubenswrapper[4822]: I0224 09:44:13.656859 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37148: no serving certificate available for the kubelet" Feb 24 09:44:13 crc kubenswrapper[4822]: I0224 09:44:13.776359 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37156: no serving certificate available for the kubelet" Feb 24 09:44:16 crc kubenswrapper[4822]: I0224 09:44:16.714204 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37158: no serving certificate available for the kubelet" Feb 24 09:44:16 crc kubenswrapper[4822]: I0224 09:44:16.834068 4822 ???:1] "http: TLS handshake 
error from 192.168.126.11:37166: no serving certificate available for the kubelet" Feb 24 09:44:19 crc kubenswrapper[4822]: I0224 09:44:19.764103 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37170: no serving certificate available for the kubelet" Feb 24 09:44:19 crc kubenswrapper[4822]: I0224 09:44:19.888758 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37176: no serving certificate available for the kubelet" Feb 24 09:44:22 crc kubenswrapper[4822]: I0224 09:44:22.818937 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38652: no serving certificate available for the kubelet" Feb 24 09:44:22 crc kubenswrapper[4822]: I0224 09:44:22.936416 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38664: no serving certificate available for the kubelet" Feb 24 09:44:25 crc kubenswrapper[4822]: I0224 09:44:25.872046 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38670: no serving certificate available for the kubelet" Feb 24 09:44:25 crc kubenswrapper[4822]: I0224 09:44:25.993048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38684: no serving certificate available for the kubelet" Feb 24 09:44:28 crc kubenswrapper[4822]: I0224 09:44:28.932636 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38698: no serving certificate available for the kubelet" Feb 24 09:44:29 crc kubenswrapper[4822]: I0224 09:44:29.043341 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38714: no serving certificate available for the kubelet" Feb 24 09:44:31 crc kubenswrapper[4822]: I0224 09:44:31.981668 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58008: no serving certificate available for the kubelet" Feb 24 09:44:32 crc kubenswrapper[4822]: I0224 09:44:32.096664 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58014: no serving certificate available for the kubelet" Feb 24 09:44:35 crc kubenswrapper[4822]: I0224 09:44:35.038104 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:58024: no serving certificate available for the kubelet" Feb 24 09:44:35 crc kubenswrapper[4822]: I0224 09:44:35.155241 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58026: no serving certificate available for the kubelet" Feb 24 09:44:38 crc kubenswrapper[4822]: I0224 09:44:38.455003 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58040: no serving certificate available for the kubelet" Feb 24 09:44:38 crc kubenswrapper[4822]: I0224 09:44:38.500310 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58052: no serving certificate available for the kubelet" Feb 24 09:44:41 crc kubenswrapper[4822]: I0224 09:44:41.499608 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36516: no serving certificate available for the kubelet" Feb 24 09:44:41 crc kubenswrapper[4822]: I0224 09:44:41.544312 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36520: no serving certificate available for the kubelet" Feb 24 09:44:44 crc kubenswrapper[4822]: I0224 09:44:44.558972 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36524: no serving certificate available for the kubelet" Feb 24 09:44:44 crc kubenswrapper[4822]: I0224 09:44:44.612528 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36530: no serving certificate available for the kubelet" Feb 24 09:44:47 crc kubenswrapper[4822]: I0224 09:44:47.614478 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36534: no serving certificate available for the kubelet" Feb 24 09:44:47 crc kubenswrapper[4822]: I0224 09:44:47.659366 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36540: no serving certificate available for the kubelet" Feb 24 09:44:50 crc kubenswrapper[4822]: I0224 09:44:50.667624 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36548: no serving certificate available for the kubelet" Feb 24 09:44:50 crc kubenswrapper[4822]: I0224 09:44:50.719325 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36554: no 
serving certificate available for the kubelet" Feb 24 09:44:53 crc kubenswrapper[4822]: I0224 09:44:53.725468 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55642: no serving certificate available for the kubelet" Feb 24 09:44:53 crc kubenswrapper[4822]: I0224 09:44:53.791264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55644: no serving certificate available for the kubelet" Feb 24 09:44:56 crc kubenswrapper[4822]: I0224 09:44:56.775803 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55656: no serving certificate available for the kubelet" Feb 24 09:44:56 crc kubenswrapper[4822]: I0224 09:44:56.848504 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55660: no serving certificate available for the kubelet" Feb 24 09:44:59 crc kubenswrapper[4822]: I0224 09:44:59.833656 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55664: no serving certificate available for the kubelet" Feb 24 09:44:59 crc kubenswrapper[4822]: I0224 09:44:59.912857 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55678: no serving certificate available for the kubelet" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.161211 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp"] Feb 24 09:45:00 crc kubenswrapper[4822]: E0224 09:45:00.161589 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="registry-server" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.161610 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="registry-server" Feb 24 09:45:00 crc kubenswrapper[4822]: E0224 09:45:00.161645 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="extract-utilities" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.161654 4822 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="extract-utilities" Feb 24 09:45:00 crc kubenswrapper[4822]: E0224 09:45:00.161668 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="extract-content" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.161675 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="extract-content" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.161878 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e069c1e-4e3c-49dd-a88a-f87325c8f4ab" containerName="registry-server" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.162539 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.166392 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.172627 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.179855 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp"] Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.267140 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc 
kubenswrapper[4822]: I0224 09:45:00.267228 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.267348 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llmp\" (UniqueName: \"kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.368592 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.368645 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.368686 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5llmp\" (UniqueName: \"kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp\") pod 
\"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.370435 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.377319 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.386580 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5llmp\" (UniqueName: \"kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp\") pod \"collect-profiles-29532105-hjqfp\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.483371 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:00 crc kubenswrapper[4822]: I0224 09:45:00.986097 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp"] Feb 24 09:45:01 crc kubenswrapper[4822]: I0224 09:45:01.319693 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" event={"ID":"c5802e90-5ee0-4550-90e8-6bc4a68a9f48","Type":"ContainerStarted","Data":"48b7be583a79e068ebcd8ff5da5d2e0225da9436026f9e2e182c9f7ddb39a77d"} Feb 24 09:45:01 crc kubenswrapper[4822]: I0224 09:45:01.319787 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" event={"ID":"c5802e90-5ee0-4550-90e8-6bc4a68a9f48","Type":"ContainerStarted","Data":"ceb7cf3cac4e5e34c938b6d8c5437e4d7dac0ca96f9f1c3d4f42c3174fa00abf"} Feb 24 09:45:01 crc kubenswrapper[4822]: I0224 09:45:01.344529 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" podStartSLOduration=1.344504975 podStartE2EDuration="1.344504975s" podCreationTimestamp="2026-02-24 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:45:01.341094302 +0000 UTC m=+2223.728856880" watchObservedRunningTime="2026-02-24 09:45:01.344504975 +0000 UTC m=+2223.732267533" Feb 24 09:45:02 crc kubenswrapper[4822]: I0224 09:45:02.328380 4822 generic.go:334] "Generic (PLEG): container finished" podID="c5802e90-5ee0-4550-90e8-6bc4a68a9f48" containerID="48b7be583a79e068ebcd8ff5da5d2e0225da9436026f9e2e182c9f7ddb39a77d" exitCode=0 Feb 24 09:45:02 crc kubenswrapper[4822]: I0224 09:45:02.328462 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" event={"ID":"c5802e90-5ee0-4550-90e8-6bc4a68a9f48","Type":"ContainerDied","Data":"48b7be583a79e068ebcd8ff5da5d2e0225da9436026f9e2e182c9f7ddb39a77d"} Feb 24 09:45:02 crc kubenswrapper[4822]: I0224 09:45:02.892659 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33534: no serving certificate available for the kubelet" Feb 24 09:45:02 crc kubenswrapper[4822]: I0224 09:45:02.972500 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33548: no serving certificate available for the kubelet" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.720814 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.724604 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume\") pod \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.724680 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume\") pod \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.724816 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5llmp\" (UniqueName: \"kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp\") pod \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\" (UID: \"c5802e90-5ee0-4550-90e8-6bc4a68a9f48\") " Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.725736 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5802e90-5ee0-4550-90e8-6bc4a68a9f48" (UID: "c5802e90-5ee0-4550-90e8-6bc4a68a9f48"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.735134 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5802e90-5ee0-4550-90e8-6bc4a68a9f48" (UID: "c5802e90-5ee0-4550-90e8-6bc4a68a9f48"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.736171 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp" (OuterVolumeSpecName: "kube-api-access-5llmp") pod "c5802e90-5ee0-4550-90e8-6bc4a68a9f48" (UID: "c5802e90-5ee0-4550-90e8-6bc4a68a9f48"). InnerVolumeSpecName "kube-api-access-5llmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.826681 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.826720 4822 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:03 crc kubenswrapper[4822]: I0224 09:45:03.826734 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5llmp\" (UniqueName: \"kubernetes.io/projected/c5802e90-5ee0-4550-90e8-6bc4a68a9f48-kube-api-access-5llmp\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:04 crc kubenswrapper[4822]: I0224 09:45:04.347820 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" Feb 24 09:45:04 crc kubenswrapper[4822]: I0224 09:45:04.348577 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-hjqfp" event={"ID":"c5802e90-5ee0-4550-90e8-6bc4a68a9f48","Type":"ContainerDied","Data":"ceb7cf3cac4e5e34c938b6d8c5437e4d7dac0ca96f9f1c3d4f42c3174fa00abf"} Feb 24 09:45:04 crc kubenswrapper[4822]: I0224 09:45:04.348626 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb7cf3cac4e5e34c938b6d8c5437e4d7dac0ca96f9f1c3d4f42c3174fa00abf" Feb 24 09:45:04 crc kubenswrapper[4822]: I0224 09:45:04.817741 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd"] Feb 24 09:45:04 crc kubenswrapper[4822]: I0224 09:45:04.825306 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-nsnkd"] Feb 24 09:45:05 crc kubenswrapper[4822]: I0224 09:45:05.951724 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33564: no serving certificate available for the kubelet" Feb 24 09:45:06 crc kubenswrapper[4822]: I0224 09:45:06.023690 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33566: no serving certificate available for the kubelet" Feb 24 09:45:06 crc kubenswrapper[4822]: I0224 09:45:06.351382 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a50eba2-73f2-4dcb-83a6-a1375a07be13" path="/var/lib/kubelet/pods/9a50eba2-73f2-4dcb-83a6-a1375a07be13/volumes" Feb 24 09:45:08 crc kubenswrapper[4822]: I0224 09:45:08.998046 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33574: no serving certificate available for the kubelet" Feb 24 09:45:09 crc kubenswrapper[4822]: I0224 09:45:09.071693 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33578: no serving certificate available for the kubelet" Feb 24 09:45:12 crc kubenswrapper[4822]: I0224 09:45:12.081698 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57296: no serving certificate available for the kubelet" Feb 24 09:45:12 crc kubenswrapper[4822]: I0224 09:45:12.136377 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57300: no serving certificate available for the kubelet" Feb 24 09:45:15 crc kubenswrapper[4822]: I0224 09:45:15.145066 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57306: no serving certificate available for the kubelet" Feb 24 09:45:15 crc kubenswrapper[4822]: I0224 09:45:15.229079 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57312: no serving certificate available for the kubelet" Feb 24 09:45:18 crc kubenswrapper[4822]: I0224 09:45:18.208855 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57320: no serving certificate available for the kubelet" Feb 24 09:45:18 crc kubenswrapper[4822]: I0224 
09:45:18.291291 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57322: no serving certificate available for the kubelet" Feb 24 09:45:21 crc kubenswrapper[4822]: I0224 09:45:21.282365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33072: no serving certificate available for the kubelet" Feb 24 09:45:21 crc kubenswrapper[4822]: I0224 09:45:21.348492 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33076: no serving certificate available for the kubelet" Feb 24 09:45:24 crc kubenswrapper[4822]: I0224 09:45:24.353580 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33092: no serving certificate available for the kubelet" Feb 24 09:45:24 crc kubenswrapper[4822]: I0224 09:45:24.393985 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33096: no serving certificate available for the kubelet" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.101004 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:25 crc kubenswrapper[4822]: E0224 09:45:25.101520 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5802e90-5ee0-4550-90e8-6bc4a68a9f48" containerName="collect-profiles" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.101549 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5802e90-5ee0-4550-90e8-6bc4a68a9f48" containerName="collect-profiles" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.101875 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5802e90-5ee0-4550-90e8-6bc4a68a9f48" containerName="collect-profiles" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.103821 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.124420 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.205512 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.205598 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.206085 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfv7l\" (UniqueName: \"kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.308102 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.308362 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.308476 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfv7l\" (UniqueName: \"kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.308873 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.309060 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.329366 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfv7l\" (UniqueName: \"kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l\") pod \"community-operators-spvr6\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.437257 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:25 crc kubenswrapper[4822]: I0224 09:45:25.972411 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:26 crc kubenswrapper[4822]: I0224 09:45:26.572220 4822 generic.go:334] "Generic (PLEG): container finished" podID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerID="51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182" exitCode=0 Feb 24 09:45:26 crc kubenswrapper[4822]: I0224 09:45:26.572283 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerDied","Data":"51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182"} Feb 24 09:45:26 crc kubenswrapper[4822]: I0224 09:45:26.572712 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerStarted","Data":"1c5fa42e5c9ecf8c9c45fa1c19e12af343fcb8f7015d47315f6b9c7cc44daf17"} Feb 24 09:45:26 crc kubenswrapper[4822]: I0224 09:45:26.574410 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.406346 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33106: no serving certificate available for the kubelet" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.456673 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33110: no serving certificate available for the kubelet" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.493148 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.495045 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.506378 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.547253 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgn2q\" (UniqueName: \"kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.547442 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.547480 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.582420 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerStarted","Data":"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2"} Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.649251 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.649305 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.649388 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgn2q\" (UniqueName: \"kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.650174 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.650392 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.681586 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-btfsd"] Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 
09:45:27.683048 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.685186 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgn2q\" (UniqueName: \"kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q\") pod \"redhat-marketplace-q2kgj\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.691574 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btfsd"] Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.751236 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-utilities\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.751289 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8x6f\" (UniqueName: \"kubernetes.io/projected/15c04227-80f0-4e0a-a727-15a988cd0760-kube-api-access-v8x6f\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.751604 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-catalog-content\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc 
kubenswrapper[4822]: I0224 09:45:27.811232 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.853550 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-catalog-content\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.853955 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-utilities\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.853981 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8x6f\" (UniqueName: \"kubernetes.io/projected/15c04227-80f0-4e0a-a727-15a988cd0760-kube-api-access-v8x6f\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.854141 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-catalog-content\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.854386 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c04227-80f0-4e0a-a727-15a988cd0760-utilities\") pod 
\"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:27 crc kubenswrapper[4822]: I0224 09:45:27.882858 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8x6f\" (UniqueName: \"kubernetes.io/projected/15c04227-80f0-4e0a-a727-15a988cd0760-kube-api-access-v8x6f\") pod \"certified-operators-btfsd\" (UID: \"15c04227-80f0-4e0a-a727-15a988cd0760\") " pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.034764 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.135552 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:28 crc kubenswrapper[4822]: W0224 09:45:28.152138 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48d8c8d6_7a83_49d6_a79a_f97343e4868c.slice/crio-3064f124267b1fcdf589f0315fbf837981baa8d17fb1e5a200806bd61a6a7a68 WatchSource:0}: Error finding container 3064f124267b1fcdf589f0315fbf837981baa8d17fb1e5a200806bd61a6a7a68: Status 404 returned error can't find the container with id 3064f124267b1fcdf589f0315fbf837981baa8d17fb1e5a200806bd61a6a7a68 Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.591766 4822 generic.go:334] "Generic (PLEG): container finished" podID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerID="ce260bc26be9032a41b0cec9a94b00577205328fceef8d2de3d12481a5478a0c" exitCode=0 Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.591983 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" 
event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerDied","Data":"ce260bc26be9032a41b0cec9a94b00577205328fceef8d2de3d12481a5478a0c"} Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.592066 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerStarted","Data":"3064f124267b1fcdf589f0315fbf837981baa8d17fb1e5a200806bd61a6a7a68"} Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.594583 4822 generic.go:334] "Generic (PLEG): container finished" podID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerID="66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2" exitCode=0 Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.594617 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerDied","Data":"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2"} Feb 24 09:45:28 crc kubenswrapper[4822]: I0224 09:45:28.601948 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btfsd"] Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.601558 4822 generic.go:334] "Generic (PLEG): container finished" podID="15c04227-80f0-4e0a-a727-15a988cd0760" containerID="7e8743a6e5d8f85d9276e9cfed1ff808d3a1028e5482baf0a0ad86c2ba8dd6d0" exitCode=0 Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.601610 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btfsd" event={"ID":"15c04227-80f0-4e0a-a727-15a988cd0760","Type":"ContainerDied","Data":"7e8743a6e5d8f85d9276e9cfed1ff808d3a1028e5482baf0a0ad86c2ba8dd6d0"} Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.602176 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btfsd" 
event={"ID":"15c04227-80f0-4e0a-a727-15a988cd0760","Type":"ContainerStarted","Data":"757aefa1a7809e721fa39d8d5c0a58925c12e5af42bdf883cb0ea96dae9ea06e"} Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.605059 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerStarted","Data":"e4149ba21ba7093d94166179136ce1ac92e1dcbfcc0254d701e595c166181407"} Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.614564 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerStarted","Data":"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35"} Feb 24 09:45:29 crc kubenswrapper[4822]: I0224 09:45:29.650945 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-spvr6" podStartSLOduration=2.19068976 podStartE2EDuration="4.650901882s" podCreationTimestamp="2026-02-24 09:45:25 +0000 UTC" firstStartedPulling="2026-02-24 09:45:26.574184982 +0000 UTC m=+2248.961947530" lastFinishedPulling="2026-02-24 09:45:29.034397104 +0000 UTC m=+2251.422159652" observedRunningTime="2026-02-24 09:45:29.64462934 +0000 UTC m=+2252.032391888" watchObservedRunningTime="2026-02-24 09:45:29.650901882 +0000 UTC m=+2252.038664430" Feb 24 09:45:30 crc kubenswrapper[4822]: I0224 09:45:30.444355 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33112: no serving certificate available for the kubelet" Feb 24 09:45:30 crc kubenswrapper[4822]: I0224 09:45:30.492384 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33128: no serving certificate available for the kubelet" Feb 24 09:45:30 crc kubenswrapper[4822]: I0224 09:45:30.626420 4822 generic.go:334] "Generic (PLEG): container finished" podID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" 
containerID="e4149ba21ba7093d94166179136ce1ac92e1dcbfcc0254d701e595c166181407" exitCode=0 Feb 24 09:45:30 crc kubenswrapper[4822]: I0224 09:45:30.626519 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerDied","Data":"e4149ba21ba7093d94166179136ce1ac92e1dcbfcc0254d701e595c166181407"} Feb 24 09:45:31 crc kubenswrapper[4822]: I0224 09:45:31.644787 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerStarted","Data":"7ab358aa3b00d2fed34add589de26e73cbadf2d6ce96dbfb42487a084566bb84"} Feb 24 09:45:31 crc kubenswrapper[4822]: I0224 09:45:31.670346 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q2kgj" podStartSLOduration=2.226609525 podStartE2EDuration="4.670330607s" podCreationTimestamp="2026-02-24 09:45:27 +0000 UTC" firstStartedPulling="2026-02-24 09:45:28.593328141 +0000 UTC m=+2250.981090689" lastFinishedPulling="2026-02-24 09:45:31.037049203 +0000 UTC m=+2253.424811771" observedRunningTime="2026-02-24 09:45:31.667712337 +0000 UTC m=+2254.055474885" watchObservedRunningTime="2026-02-24 09:45:31.670330607 +0000 UTC m=+2254.058093155" Feb 24 09:45:33 crc kubenswrapper[4822]: I0224 09:45:33.498101 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46314: no serving certificate available for the kubelet" Feb 24 09:45:33 crc kubenswrapper[4822]: I0224 09:45:33.539682 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46318: no serving certificate available for the kubelet" Feb 24 09:45:35 crc kubenswrapper[4822]: I0224 09:45:35.437689 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:35 crc kubenswrapper[4822]: I0224 09:45:35.438268 4822 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:35 crc kubenswrapper[4822]: I0224 09:45:35.518343 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:35 crc kubenswrapper[4822]: I0224 09:45:35.777558 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:36 crc kubenswrapper[4822]: I0224 09:45:36.551023 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46330: no serving certificate available for the kubelet" Feb 24 09:45:36 crc kubenswrapper[4822]: I0224 09:45:36.606149 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46340: no serving certificate available for the kubelet" Feb 24 09:45:36 crc kubenswrapper[4822]: I0224 09:45:36.697898 4822 generic.go:334] "Generic (PLEG): container finished" podID="15c04227-80f0-4e0a-a727-15a988cd0760" containerID="64252c1608453bfc305391ff93fdbfb26b9ba8c2f787ac3cffb0f9194ad3d0ac" exitCode=0 Feb 24 09:45:36 crc kubenswrapper[4822]: I0224 09:45:36.699347 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btfsd" event={"ID":"15c04227-80f0-4e0a-a727-15a988cd0760","Type":"ContainerDied","Data":"64252c1608453bfc305391ff93fdbfb26b9ba8c2f787ac3cffb0f9194ad3d0ac"} Feb 24 09:45:36 crc kubenswrapper[4822]: I0224 09:45:36.891354 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:37 crc kubenswrapper[4822]: I0224 09:45:37.737582 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-spvr6" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="registry-server" containerID="cri-o://e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35" gracePeriod=2 Feb 24 09:45:37 crc 
kubenswrapper[4822]: I0224 09:45:37.739374 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btfsd" event={"ID":"15c04227-80f0-4e0a-a727-15a988cd0760","Type":"ContainerStarted","Data":"9546adc2483babff18e913e09b026e0a7c8d085cd88c7e35f6492866c4b9aa82"} Feb 24 09:45:37 crc kubenswrapper[4822]: I0224 09:45:37.811901 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:37 crc kubenswrapper[4822]: I0224 09:45:37.811981 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:37 crc kubenswrapper[4822]: I0224 09:45:37.821161 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-btfsd" podStartSLOduration=3.21203435 podStartE2EDuration="10.821141014s" podCreationTimestamp="2026-02-24 09:45:27 +0000 UTC" firstStartedPulling="2026-02-24 09:45:29.602780751 +0000 UTC m=+2251.990543299" lastFinishedPulling="2026-02-24 09:45:37.211887375 +0000 UTC m=+2259.599649963" observedRunningTime="2026-02-24 09:45:37.817420493 +0000 UTC m=+2260.205183051" watchObservedRunningTime="2026-02-24 09:45:37.821141014 +0000 UTC m=+2260.208903572" Feb 24 09:45:37 crc kubenswrapper[4822]: I0224 09:45:37.877571 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.037205 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.037254 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.249406 4822 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.336477 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities\") pod \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.336659 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content\") pod \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.336690 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfv7l\" (UniqueName: \"kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l\") pod \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\" (UID: \"a7bd1955-1e32-4114-85bc-21c1fc43ac95\") " Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.337534 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities" (OuterVolumeSpecName: "utilities") pod "a7bd1955-1e32-4114-85bc-21c1fc43ac95" (UID: "a7bd1955-1e32-4114-85bc-21c1fc43ac95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.342170 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l" (OuterVolumeSpecName: "kube-api-access-mfv7l") pod "a7bd1955-1e32-4114-85bc-21c1fc43ac95" (UID: "a7bd1955-1e32-4114-85bc-21c1fc43ac95"). InnerVolumeSpecName "kube-api-access-mfv7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.389292 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7bd1955-1e32-4114-85bc-21c1fc43ac95" (UID: "a7bd1955-1e32-4114-85bc-21c1fc43ac95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.438971 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.439028 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfv7l\" (UniqueName: \"kubernetes.io/projected/a7bd1955-1e32-4114-85bc-21c1fc43ac95-kube-api-access-mfv7l\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.439045 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd1955-1e32-4114-85bc-21c1fc43ac95-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.759438 4822 generic.go:334] "Generic (PLEG): container finished" podID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerID="e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35" exitCode=0 Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.759555 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerDied","Data":"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35"} Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.759580 4822 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-spvr6" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.759619 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spvr6" event={"ID":"a7bd1955-1e32-4114-85bc-21c1fc43ac95","Type":"ContainerDied","Data":"1c5fa42e5c9ecf8c9c45fa1c19e12af343fcb8f7015d47315f6b9c7cc44daf17"} Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.759652 4822 scope.go:117] "RemoveContainer" containerID="e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.797952 4822 scope.go:117] "RemoveContainer" containerID="66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.813405 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.820945 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-spvr6"] Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.824417 4822 scope.go:117] "RemoveContainer" containerID="51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.849155 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.859264 4822 scope.go:117] "RemoveContainer" containerID="e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35" Feb 24 09:45:38 crc kubenswrapper[4822]: E0224 09:45:38.859835 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35\": container with ID starting with 
e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35 not found: ID does not exist" containerID="e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.859864 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35"} err="failed to get container status \"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35\": rpc error: code = NotFound desc = could not find container \"e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35\": container with ID starting with e69744365d61e73a7252da6ca912a0a913148695ef58128f0ceaab1afcd2cc35 not found: ID does not exist" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.859884 4822 scope.go:117] "RemoveContainer" containerID="66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2" Feb 24 09:45:38 crc kubenswrapper[4822]: E0224 09:45:38.860368 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2\": container with ID starting with 66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2 not found: ID does not exist" containerID="66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.860387 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2"} err="failed to get container status \"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2\": rpc error: code = NotFound desc = could not find container \"66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2\": container with ID starting with 66d20189f74f95b9ebf93e72f204724d3319b831418c125103def335817468f2 not found: ID does not 
exist" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.860400 4822 scope.go:117] "RemoveContainer" containerID="51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182" Feb 24 09:45:38 crc kubenswrapper[4822]: E0224 09:45:38.860746 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182\": container with ID starting with 51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182 not found: ID does not exist" containerID="51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182" Feb 24 09:45:38 crc kubenswrapper[4822]: I0224 09:45:38.860764 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182"} err="failed to get container status \"51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182\": rpc error: code = NotFound desc = could not find container \"51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182\": container with ID starting with 51c07f0ab0d5d1ec523102609fa82b05b48964d5aec9698d3b3b727782060182 not found: ID does not exist" Feb 24 09:45:39 crc kubenswrapper[4822]: I0224 09:45:39.091394 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-btfsd" podUID="15c04227-80f0-4e0a-a727-15a988cd0760" containerName="registry-server" probeResult="failure" output=< Feb 24 09:45:39 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:45:39 crc kubenswrapper[4822]: > Feb 24 09:45:39 crc kubenswrapper[4822]: I0224 09:45:39.592233 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46354: no serving certificate available for the kubelet" Feb 24 09:45:39 crc kubenswrapper[4822]: I0224 09:45:39.649481 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46358: no serving certificate 
available for the kubelet" Feb 24 09:45:40 crc kubenswrapper[4822]: I0224 09:45:40.348213 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" path="/var/lib/kubelet/pods/a7bd1955-1e32-4114-85bc-21c1fc43ac95/volumes" Feb 24 09:45:41 crc kubenswrapper[4822]: I0224 09:45:41.286390 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:41 crc kubenswrapper[4822]: I0224 09:45:41.286787 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q2kgj" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="registry-server" containerID="cri-o://7ab358aa3b00d2fed34add589de26e73cbadf2d6ce96dbfb42487a084566bb84" gracePeriod=2 Feb 24 09:45:41 crc kubenswrapper[4822]: I0224 09:45:41.792442 4822 generic.go:334] "Generic (PLEG): container finished" podID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerID="7ab358aa3b00d2fed34add589de26e73cbadf2d6ce96dbfb42487a084566bb84" exitCode=0 Feb 24 09:45:41 crc kubenswrapper[4822]: I0224 09:45:41.792501 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerDied","Data":"7ab358aa3b00d2fed34add589de26e73cbadf2d6ce96dbfb42487a084566bb84"} Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.368691 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.541259 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content\") pod \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.541312 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgn2q\" (UniqueName: \"kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q\") pod \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.541374 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities\") pod \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\" (UID: \"48d8c8d6-7a83-49d6-a79a-f97343e4868c\") " Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.542761 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities" (OuterVolumeSpecName: "utilities") pod "48d8c8d6-7a83-49d6-a79a-f97343e4868c" (UID: "48d8c8d6-7a83-49d6-a79a-f97343e4868c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.558290 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q" (OuterVolumeSpecName: "kube-api-access-lgn2q") pod "48d8c8d6-7a83-49d6-a79a-f97343e4868c" (UID: "48d8c8d6-7a83-49d6-a79a-f97343e4868c"). InnerVolumeSpecName "kube-api-access-lgn2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.581361 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48d8c8d6-7a83-49d6-a79a-f97343e4868c" (UID: "48d8c8d6-7a83-49d6-a79a-f97343e4868c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.643404 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54598: no serving certificate available for the kubelet" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.643520 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.643628 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgn2q\" (UniqueName: \"kubernetes.io/projected/48d8c8d6-7a83-49d6-a79a-f97343e4868c-kube-api-access-lgn2q\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.643650 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d8c8d6-7a83-49d6-a79a-f97343e4868c-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.698298 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54604: no serving certificate available for the kubelet" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.804902 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kgj" event={"ID":"48d8c8d6-7a83-49d6-a79a-f97343e4868c","Type":"ContainerDied","Data":"3064f124267b1fcdf589f0315fbf837981baa8d17fb1e5a200806bd61a6a7a68"} Feb 24 
09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.804974 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kgj" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.805020 4822 scope.go:117] "RemoveContainer" containerID="7ab358aa3b00d2fed34add589de26e73cbadf2d6ce96dbfb42487a084566bb84" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.826640 4822 scope.go:117] "RemoveContainer" containerID="e4149ba21ba7093d94166179136ce1ac92e1dcbfcc0254d701e595c166181407" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.847758 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.849822 4822 scope.go:117] "RemoveContainer" containerID="ce260bc26be9032a41b0cec9a94b00577205328fceef8d2de3d12481a5478a0c" Feb 24 09:45:42 crc kubenswrapper[4822]: I0224 09:45:42.859808 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kgj"] Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.059648 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:45:44 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:45:44 crc kubenswrapper[4822]: > Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.059765 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.060784 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 
09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.131714 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8" gracePeriod=30 Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.356399 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" path="/var/lib/kubelet/pods/48d8c8d6-7a83-49d6-a79a-f97343e4868c/volumes" Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.825271 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8" exitCode=143 Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.825325 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8"} Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.825354 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6"} Feb 24 09:45:44 crc kubenswrapper[4822]: I0224 09:45:44.825374 4822 scope.go:117] "RemoveContainer" containerID="d624bbd1784d52a17fcbd6935bb41e97b5dc3c9ad4decefba9947fb693a63d3f" Feb 24 09:45:45 crc kubenswrapper[4822]: I0224 09:45:45.676563 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 24 09:45:45 crc kubenswrapper[4822]: I0224 09:45:45.676652 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:45:45 crc kubenswrapper[4822]: I0224 09:45:45.694508 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54616: no serving certificate available for the kubelet" Feb 24 09:45:45 crc kubenswrapper[4822]: I0224 09:45:45.754901 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54632: no serving certificate available for the kubelet" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.086883 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.131541 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-btfsd" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.221378 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btfsd"] Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.329663 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.330307 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chshb" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="registry-server" containerID="cri-o://2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc" gracePeriod=2 Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.751979 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54640: no 
serving certificate available for the kubelet" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.792436 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54650: no serving certificate available for the kubelet" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.804839 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.865621 4822 generic.go:334] "Generic (PLEG): container finished" podID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerID="2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc" exitCode=0 Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.865687 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerDied","Data":"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc"} Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.865759 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chshb" event={"ID":"66ffa1e1-3d61-4458-a48b-5364bcce0b29","Type":"ContainerDied","Data":"9102da19bd544a8cc41b9155bcfefad88210f1ef433061426d47465162af7e7c"} Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.865808 4822 scope.go:117] "RemoveContainer" containerID="2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.865709 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chshb" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.890141 4822 scope.go:117] "RemoveContainer" containerID="55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.964758 4822 scope.go:117] "RemoveContainer" containerID="eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.971793 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pb9h\" (UniqueName: \"kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h\") pod \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.971885 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content\") pod \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.971960 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities\") pod \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\" (UID: \"66ffa1e1-3d61-4458-a48b-5364bcce0b29\") " Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.973859 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities" (OuterVolumeSpecName: "utilities") pod "66ffa1e1-3d61-4458-a48b-5364bcce0b29" (UID: "66ffa1e1-3d61-4458-a48b-5364bcce0b29"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:48 crc kubenswrapper[4822]: I0224 09:45:48.995115 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h" (OuterVolumeSpecName: "kube-api-access-6pb9h") pod "66ffa1e1-3d61-4458-a48b-5364bcce0b29" (UID: "66ffa1e1-3d61-4458-a48b-5364bcce0b29"). InnerVolumeSpecName "kube-api-access-6pb9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.020046 4822 scope.go:117] "RemoveContainer" containerID="2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc" Feb 24 09:45:49 crc kubenswrapper[4822]: E0224 09:45:49.020459 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc\": container with ID starting with 2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc not found: ID does not exist" containerID="2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.020559 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc"} err="failed to get container status \"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc\": rpc error: code = NotFound desc = could not find container \"2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc\": container with ID starting with 2ec7e5441acdec59eed9e5a46e3ae9d2b2a26b9140084ff3a09120373a48d1bc not found: ID does not exist" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.020662 4822 scope.go:117] "RemoveContainer" containerID="55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17" Feb 24 09:45:49 crc kubenswrapper[4822]: E0224 09:45:49.021028 
4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17\": container with ID starting with 55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17 not found: ID does not exist" containerID="55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.021059 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17"} err="failed to get container status \"55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17\": rpc error: code = NotFound desc = could not find container \"55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17\": container with ID starting with 55158ac8f28f433602b6fded96d590b90935dda339cf75847693752eff5f2c17 not found: ID does not exist" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.021080 4822 scope.go:117] "RemoveContainer" containerID="eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a" Feb 24 09:45:49 crc kubenswrapper[4822]: E0224 09:45:49.021352 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a\": container with ID starting with eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a not found: ID does not exist" containerID="eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.021393 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a"} err="failed to get container status \"eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a\": rpc error: code = 
NotFound desc = could not find container \"eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a\": container with ID starting with eb975422bd31a207fef3238c881c96d668f2729b23bc32bfb955a0e68b3b420a not found: ID does not exist" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.032033 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66ffa1e1-3d61-4458-a48b-5364bcce0b29" (UID: "66ffa1e1-3d61-4458-a48b-5364bcce0b29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.074082 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pb9h\" (UniqueName: \"kubernetes.io/projected/66ffa1e1-3d61-4458-a48b-5364bcce0b29-kube-api-access-6pb9h\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.074129 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.074142 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ffa1e1-3d61-4458-a48b-5364bcce0b29-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.198488 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:45:49 crc kubenswrapper[4822]: I0224 09:45:49.206573 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chshb"] Feb 24 09:45:50 crc kubenswrapper[4822]: I0224 09:45:50.350008 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" path="/var/lib/kubelet/pods/66ffa1e1-3d61-4458-a48b-5364bcce0b29/volumes" Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.323993 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:45:51 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:45:51 crc kubenswrapper[4822]: > Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.324103 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.325116 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.516772 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01" gracePeriod=30 Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.799700 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39898: no serving certificate available for the kubelet" Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.855789 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39912: no serving certificate available for the kubelet" Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.892894 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01" exitCode=143 Feb 24 09:45:51 
crc kubenswrapper[4822]: I0224 09:45:51.892947 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01"} Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.893000 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1"} Feb 24 09:45:51 crc kubenswrapper[4822]: I0224 09:45:51.893025 4822 scope.go:117] "RemoveContainer" containerID="cb5c3867effd9054c31685c8dda918a018f5ba3c49440c0a591d2370425bc55f" Feb 24 09:45:53 crc kubenswrapper[4822]: I0224 09:45:53.058062 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:45:53 crc kubenswrapper[4822]: I0224 09:45:53.058118 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:45:54 crc kubenswrapper[4822]: I0224 09:45:54.900316 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39928: no serving certificate available for the kubelet" Feb 24 09:45:54 crc kubenswrapper[4822]: I0224 09:45:54.951047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39944: no serving certificate available for the kubelet" Feb 24 09:45:57 crc kubenswrapper[4822]: I0224 09:45:57.941644 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39948: no serving certificate available for the kubelet" Feb 24 09:45:58 crc kubenswrapper[4822]: I0224 09:45:58.001137 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39958: no serving certificate available for the kubelet" Feb 24 09:46:00 crc kubenswrapper[4822]: I0224 09:46:00.997791 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39974: no serving certificate 
available for the kubelet" Feb 24 09:46:01 crc kubenswrapper[4822]: I0224 09:46:01.057475 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39986: no serving certificate available for the kubelet" Feb 24 09:46:01 crc kubenswrapper[4822]: I0224 09:46:01.616749 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:46:01 crc kubenswrapper[4822]: I0224 09:46:01.616828 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:46:03 crc kubenswrapper[4822]: I0224 09:46:03.467303 4822 scope.go:117] "RemoveContainer" containerID="81b38047910613cbe7fe491f561bb0c1ab4d809c262dd448832ee4401c4b34c9" Feb 24 09:46:04 crc kubenswrapper[4822]: I0224 09:46:04.054955 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39900: no serving certificate available for the kubelet" Feb 24 09:46:04 crc kubenswrapper[4822]: I0224 09:46:04.110388 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39904: no serving certificate available for the kubelet" Feb 24 09:46:07 crc kubenswrapper[4822]: I0224 09:46:07.109077 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39916: no serving certificate available for the kubelet" Feb 24 09:46:07 crc kubenswrapper[4822]: I0224 09:46:07.165150 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39932: no serving certificate available for the kubelet" Feb 24 09:46:10 crc kubenswrapper[4822]: I0224 09:46:10.222207 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39940: no serving certificate available for the kubelet" Feb 24 09:46:10 crc kubenswrapper[4822]: I0224 09:46:10.277965 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39944: no serving certificate available for the kubelet" Feb 24 09:46:13 crc kubenswrapper[4822]: I0224 09:46:13.289211 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38126: no serving certificate available for the kubelet" Feb 24 09:46:13 crc 
kubenswrapper[4822]: I0224 09:46:13.346502 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38134: no serving certificate available for the kubelet" Feb 24 09:46:15 crc kubenswrapper[4822]: I0224 09:46:15.677008 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:46:15 crc kubenswrapper[4822]: I0224 09:46:15.677697 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:46:16 crc kubenswrapper[4822]: I0224 09:46:16.356315 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38138: no serving certificate available for the kubelet" Feb 24 09:46:16 crc kubenswrapper[4822]: I0224 09:46:16.412376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38150: no serving certificate available for the kubelet" Feb 24 09:46:17 crc kubenswrapper[4822]: I0224 09:46:17.102491 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38164: no serving certificate available for the kubelet" Feb 24 09:46:19 crc kubenswrapper[4822]: I0224 09:46:19.415561 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38168: no serving certificate available for the kubelet" Feb 24 09:46:19 crc kubenswrapper[4822]: I0224 09:46:19.470974 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38184: no serving certificate available for the kubelet" Feb 24 09:46:22 crc kubenswrapper[4822]: I0224 09:46:22.472517 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45040: no serving certificate available for the kubelet" Feb 24 09:46:22 crc 
kubenswrapper[4822]: I0224 09:46:22.538713 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45046: no serving certificate available for the kubelet" Feb 24 09:46:25 crc kubenswrapper[4822]: I0224 09:46:25.534217 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45060: no serving certificate available for the kubelet" Feb 24 09:46:25 crc kubenswrapper[4822]: I0224 09:46:25.589927 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45072: no serving certificate available for the kubelet" Feb 24 09:46:28 crc kubenswrapper[4822]: I0224 09:46:28.592601 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45084: no serving certificate available for the kubelet" Feb 24 09:46:28 crc kubenswrapper[4822]: I0224 09:46:28.655072 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45090: no serving certificate available for the kubelet" Feb 24 09:46:31 crc kubenswrapper[4822]: I0224 09:46:31.670364 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36760: no serving certificate available for the kubelet" Feb 24 09:46:31 crc kubenswrapper[4822]: I0224 09:46:31.751312 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36766: no serving certificate available for the kubelet" Feb 24 09:46:34 crc kubenswrapper[4822]: I0224 09:46:34.712001 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36782: no serving certificate available for the kubelet" Feb 24 09:46:34 crc kubenswrapper[4822]: I0224 09:46:34.790214 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36784: no serving certificate available for the kubelet" Feb 24 09:46:38 crc kubenswrapper[4822]: I0224 09:46:38.396990 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36800: no serving certificate available for the kubelet" Feb 24 09:46:38 crc kubenswrapper[4822]: I0224 09:46:38.439765 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36812: no serving certificate available for the kubelet" Feb 24 09:46:41 crc kubenswrapper[4822]: I0224 
09:46:41.453850 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54342: no serving certificate available for the kubelet" Feb 24 09:46:41 crc kubenswrapper[4822]: I0224 09:46:41.512024 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54352: no serving certificate available for the kubelet" Feb 24 09:46:44 crc kubenswrapper[4822]: I0224 09:46:44.510791 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54360: no serving certificate available for the kubelet" Feb 24 09:46:44 crc kubenswrapper[4822]: I0224 09:46:44.570779 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54370: no serving certificate available for the kubelet" Feb 24 09:46:45 crc kubenswrapper[4822]: I0224 09:46:45.676291 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:46:45 crc kubenswrapper[4822]: I0224 09:46:45.676785 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:46:45 crc kubenswrapper[4822]: I0224 09:46:45.676866 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:46:45 crc kubenswrapper[4822]: I0224 09:46:45.678184 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:46:45 crc kubenswrapper[4822]: I0224 09:46:45.678319 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" gracePeriod=600 Feb 24 09:46:45 crc kubenswrapper[4822]: E0224 09:46:45.805569 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:46:46 crc kubenswrapper[4822]: I0224 09:46:46.451444 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" exitCode=0 Feb 24 09:46:46 crc kubenswrapper[4822]: I0224 09:46:46.451537 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d"} Feb 24 09:46:46 crc kubenswrapper[4822]: I0224 09:46:46.451605 4822 scope.go:117] "RemoveContainer" containerID="a7f3dc489737dc9a0f77e32caabd7bee304c720801f3d539e75c66e8325fdbce" Feb 24 09:46:46 crc kubenswrapper[4822]: I0224 09:46:46.452570 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:46:46 crc kubenswrapper[4822]: E0224 09:46:46.453250 4822 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:46:47 crc kubenswrapper[4822]: I0224 09:46:47.573991 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54382: no serving certificate available for the kubelet" Feb 24 09:46:47 crc kubenswrapper[4822]: I0224 09:46:47.636323 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54394: no serving certificate available for the kubelet" Feb 24 09:46:50 crc kubenswrapper[4822]: I0224 09:46:50.635454 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54410: no serving certificate available for the kubelet" Feb 24 09:46:50 crc kubenswrapper[4822]: I0224 09:46:50.724286 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54420: no serving certificate available for the kubelet" Feb 24 09:46:53 crc kubenswrapper[4822]: I0224 09:46:53.683358 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37810: no serving certificate available for the kubelet" Feb 24 09:46:53 crc kubenswrapper[4822]: I0224 09:46:53.786708 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37816: no serving certificate available for the kubelet" Feb 24 09:46:54 crc kubenswrapper[4822]: I0224 09:46:54.839860 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37830: no serving certificate available for the kubelet" Feb 24 09:46:56 crc kubenswrapper[4822]: I0224 09:46:56.737434 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37842: no serving certificate available for the kubelet" Feb 24 09:46:56 crc kubenswrapper[4822]: I0224 09:46:56.846213 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37846: no serving certificate available for the 
kubelet" Feb 24 09:46:58 crc kubenswrapper[4822]: I0224 09:46:58.349585 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:46:58 crc kubenswrapper[4822]: E0224 09:46:58.350260 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:46:59 crc kubenswrapper[4822]: I0224 09:46:59.787683 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37860: no serving certificate available for the kubelet" Feb 24 09:46:59 crc kubenswrapper[4822]: I0224 09:46:59.939580 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37876: no serving certificate available for the kubelet" Feb 24 09:47:02 crc kubenswrapper[4822]: I0224 09:47:02.845129 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46910: no serving certificate available for the kubelet" Feb 24 09:47:02 crc kubenswrapper[4822]: I0224 09:47:02.986338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46924: no serving certificate available for the kubelet" Feb 24 09:47:05 crc kubenswrapper[4822]: I0224 09:47:05.906034 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46932: no serving certificate available for the kubelet" Feb 24 09:47:06 crc kubenswrapper[4822]: I0224 09:47:06.049746 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46948: no serving certificate available for the kubelet" Feb 24 09:47:08 crc kubenswrapper[4822]: I0224 09:47:08.957640 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46960: no serving certificate available for the kubelet" Feb 24 09:47:09 crc kubenswrapper[4822]: I0224 09:47:09.101128 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:46962: no serving certificate available for the kubelet" Feb 24 09:47:09 crc kubenswrapper[4822]: I0224 09:47:09.338159 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:47:09 crc kubenswrapper[4822]: E0224 09:47:09.338749 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:47:12 crc kubenswrapper[4822]: I0224 09:47:12.014458 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45146: no serving certificate available for the kubelet" Feb 24 09:47:12 crc kubenswrapper[4822]: I0224 09:47:12.157873 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45158: no serving certificate available for the kubelet" Feb 24 09:47:15 crc kubenswrapper[4822]: I0224 09:47:15.074600 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45172: no serving certificate available for the kubelet" Feb 24 09:47:15 crc kubenswrapper[4822]: I0224 09:47:15.213219 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45186: no serving certificate available for the kubelet" Feb 24 09:47:18 crc kubenswrapper[4822]: I0224 09:47:18.136377 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45202: no serving certificate available for the kubelet" Feb 24 09:47:18 crc kubenswrapper[4822]: I0224 09:47:18.276308 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45204: no serving certificate available for the kubelet" Feb 24 09:47:20 crc kubenswrapper[4822]: I0224 09:47:20.337558 4822 scope.go:117] "RemoveContainer" 
containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:47:20 crc kubenswrapper[4822]: E0224 09:47:20.337879 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:47:21 crc kubenswrapper[4822]: I0224 09:47:21.206246 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37302: no serving certificate available for the kubelet" Feb 24 09:47:21 crc kubenswrapper[4822]: I0224 09:47:21.339088 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37308: no serving certificate available for the kubelet" Feb 24 09:47:23 crc kubenswrapper[4822]: I0224 09:47:23.021903 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37314: no serving certificate available for the kubelet" Feb 24 09:47:24 crc kubenswrapper[4822]: I0224 09:47:24.265814 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37322: no serving certificate available for the kubelet" Feb 24 09:47:24 crc kubenswrapper[4822]: I0224 09:47:24.403959 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37326: no serving certificate available for the kubelet" Feb 24 09:47:27 crc kubenswrapper[4822]: I0224 09:47:27.324888 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37340: no serving certificate available for the kubelet" Feb 24 09:47:27 crc kubenswrapper[4822]: I0224 09:47:27.467161 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37352: no serving certificate available for the kubelet" Feb 24 09:47:30 crc kubenswrapper[4822]: I0224 09:47:30.390893 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37360: no serving certificate available for the 
kubelet" Feb 24 09:47:30 crc kubenswrapper[4822]: I0224 09:47:30.525375 4822 ???:1] "http: TLS handshake error from 192.168.126.11:37372: no serving certificate available for the kubelet" Feb 24 09:47:32 crc kubenswrapper[4822]: I0224 09:47:32.302663 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43618: no serving certificate available for the kubelet" Feb 24 09:47:33 crc kubenswrapper[4822]: I0224 09:47:33.337468 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:47:33 crc kubenswrapper[4822]: E0224 09:47:33.338485 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:47:33 crc kubenswrapper[4822]: I0224 09:47:33.449509 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43624: no serving certificate available for the kubelet" Feb 24 09:47:33 crc kubenswrapper[4822]: I0224 09:47:33.584433 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43634: no serving certificate available for the kubelet" Feb 24 09:47:36 crc kubenswrapper[4822]: I0224 09:47:36.010338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43642: no serving certificate available for the kubelet" Feb 24 09:47:36 crc kubenswrapper[4822]: I0224 09:47:36.509165 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43652: no serving certificate available for the kubelet" Feb 24 09:47:36 crc kubenswrapper[4822]: I0224 09:47:36.632525 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43656: no serving certificate available for the kubelet" Feb 24 09:47:39 crc kubenswrapper[4822]: I0224 09:47:39.559718 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:43666: no serving certificate available for the kubelet" Feb 24 09:47:39 crc kubenswrapper[4822]: I0224 09:47:39.679859 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43678: no serving certificate available for the kubelet" Feb 24 09:47:42 crc kubenswrapper[4822]: I0224 09:47:42.614968 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47858: no serving certificate available for the kubelet" Feb 24 09:47:42 crc kubenswrapper[4822]: I0224 09:47:42.735705 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47870: no serving certificate available for the kubelet" Feb 24 09:47:45 crc kubenswrapper[4822]: I0224 09:47:45.337448 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:47:45 crc kubenswrapper[4822]: E0224 09:47:45.338073 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:47:45 crc kubenswrapper[4822]: I0224 09:47:45.675574 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47880: no serving certificate available for the kubelet" Feb 24 09:47:45 crc kubenswrapper[4822]: I0224 09:47:45.799232 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47890: no serving certificate available for the kubelet" Feb 24 09:47:48 crc kubenswrapper[4822]: I0224 09:47:48.732970 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47902: no serving certificate available for the kubelet" Feb 24 09:47:48 crc kubenswrapper[4822]: I0224 09:47:48.853876 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47916: no 
serving certificate available for the kubelet" Feb 24 09:47:51 crc kubenswrapper[4822]: I0224 09:47:51.825137 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60158: no serving certificate available for the kubelet" Feb 24 09:47:51 crc kubenswrapper[4822]: I0224 09:47:51.909587 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60164: no serving certificate available for the kubelet" Feb 24 09:47:54 crc kubenswrapper[4822]: I0224 09:47:54.886181 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60168: no serving certificate available for the kubelet" Feb 24 09:47:54 crc kubenswrapper[4822]: I0224 09:47:54.963647 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60184: no serving certificate available for the kubelet" Feb 24 09:47:57 crc kubenswrapper[4822]: I0224 09:47:57.337630 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:47:57 crc kubenswrapper[4822]: E0224 09:47:57.338550 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:47:57 crc kubenswrapper[4822]: I0224 09:47:57.929184 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60192: no serving certificate available for the kubelet" Feb 24 09:47:58 crc kubenswrapper[4822]: I0224 09:47:58.017326 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60194: no serving certificate available for the kubelet" Feb 24 09:48:00 crc kubenswrapper[4822]: I0224 09:48:00.964026 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60198: no serving certificate available for the kubelet" Feb 24 09:48:01 crc 
kubenswrapper[4822]: I0224 09:48:01.047754 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60202: no serving certificate available for the kubelet" Feb 24 09:48:04 crc kubenswrapper[4822]: I0224 09:48:04.015368 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38526: no serving certificate available for the kubelet" Feb 24 09:48:04 crc kubenswrapper[4822]: I0224 09:48:04.084246 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38528: no serving certificate available for the kubelet" Feb 24 09:48:07 crc kubenswrapper[4822]: I0224 09:48:07.071466 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38532: no serving certificate available for the kubelet" Feb 24 09:48:07 crc kubenswrapper[4822]: I0224 09:48:07.130278 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38544: no serving certificate available for the kubelet" Feb 24 09:48:10 crc kubenswrapper[4822]: I0224 09:48:10.105802 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38556: no serving certificate available for the kubelet" Feb 24 09:48:10 crc kubenswrapper[4822]: I0224 09:48:10.166503 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38568: no serving certificate available for the kubelet" Feb 24 09:48:12 crc kubenswrapper[4822]: I0224 09:48:12.337667 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:48:12 crc kubenswrapper[4822]: E0224 09:48:12.338625 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:48:13 crc kubenswrapper[4822]: I0224 09:48:13.162244 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:41084: no serving certificate available for the kubelet" Feb 24 09:48:13 crc kubenswrapper[4822]: I0224 09:48:13.229182 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41096: no serving certificate available for the kubelet" Feb 24 09:48:16 crc kubenswrapper[4822]: I0224 09:48:16.224970 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41102: no serving certificate available for the kubelet" Feb 24 09:48:16 crc kubenswrapper[4822]: I0224 09:48:16.288190 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41106: no serving certificate available for the kubelet" Feb 24 09:48:19 crc kubenswrapper[4822]: I0224 09:48:19.266530 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41116: no serving certificate available for the kubelet" Feb 24 09:48:19 crc kubenswrapper[4822]: I0224 09:48:19.324662 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41130: no serving certificate available for the kubelet" Feb 24 09:48:22 crc kubenswrapper[4822]: I0224 09:48:22.318704 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38258: no serving certificate available for the kubelet" Feb 24 09:48:22 crc kubenswrapper[4822]: I0224 09:48:22.367049 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38274: no serving certificate available for the kubelet" Feb 24 09:48:25 crc kubenswrapper[4822]: I0224 09:48:25.372399 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38284: no serving certificate available for the kubelet" Feb 24 09:48:25 crc kubenswrapper[4822]: I0224 09:48:25.416231 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38290: no serving certificate available for the kubelet" Feb 24 09:48:27 crc kubenswrapper[4822]: I0224 09:48:27.338262 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:48:27 crc kubenswrapper[4822]: E0224 09:48:27.339096 4822 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:48:28 crc kubenswrapper[4822]: I0224 09:48:28.431728 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38296: no serving certificate available for the kubelet" Feb 24 09:48:28 crc kubenswrapper[4822]: I0224 09:48:28.489446 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38306: no serving certificate available for the kubelet" Feb 24 09:48:31 crc kubenswrapper[4822]: I0224 09:48:31.496717 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57608: no serving certificate available for the kubelet" Feb 24 09:48:31 crc kubenswrapper[4822]: I0224 09:48:31.537459 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57620: no serving certificate available for the kubelet" Feb 24 09:48:34 crc kubenswrapper[4822]: I0224 09:48:34.592835 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57630: no serving certificate available for the kubelet" Feb 24 09:48:34 crc kubenswrapper[4822]: I0224 09:48:34.647287 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57636: no serving certificate available for the kubelet" Feb 24 09:48:37 crc kubenswrapper[4822]: I0224 09:48:37.650180 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57644: no serving certificate available for the kubelet" Feb 24 09:48:37 crc kubenswrapper[4822]: I0224 09:48:37.705419 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57652: no serving certificate available for the kubelet" Feb 24 09:48:39 crc kubenswrapper[4822]: I0224 09:48:39.337517 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 
24 09:48:39 crc kubenswrapper[4822]: E0224 09:48:39.338487 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:48:40 crc kubenswrapper[4822]: I0224 09:48:40.692271 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57666: no serving certificate available for the kubelet" Feb 24 09:48:40 crc kubenswrapper[4822]: I0224 09:48:40.754104 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57670: no serving certificate available for the kubelet" Feb 24 09:48:43 crc kubenswrapper[4822]: I0224 09:48:43.754168 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34704: no serving certificate available for the kubelet" Feb 24 09:48:43 crc kubenswrapper[4822]: I0224 09:48:43.819668 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34716: no serving certificate available for the kubelet" Feb 24 09:48:46 crc kubenswrapper[4822]: I0224 09:48:46.810297 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34726: no serving certificate available for the kubelet" Feb 24 09:48:46 crc kubenswrapper[4822]: I0224 09:48:46.875844 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34736: no serving certificate available for the kubelet" Feb 24 09:48:49 crc kubenswrapper[4822]: I0224 09:48:49.893141 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34740: no serving certificate available for the kubelet" Feb 24 09:48:49 crc kubenswrapper[4822]: I0224 09:48:49.948443 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34750: no serving certificate available for the kubelet" Feb 24 09:48:52 crc kubenswrapper[4822]: I0224 09:48:52.956823 4822 ???:1] 
"http: TLS handshake error from 192.168.126.11:59862: no serving certificate available for the kubelet" Feb 24 09:48:53 crc kubenswrapper[4822]: I0224 09:48:53.008035 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59866: no serving certificate available for the kubelet" Feb 24 09:48:53 crc kubenswrapper[4822]: I0224 09:48:53.337839 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:48:53 crc kubenswrapper[4822]: E0224 09:48:53.338401 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:48:56 crc kubenswrapper[4822]: I0224 09:48:56.013013 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59882: no serving certificate available for the kubelet" Feb 24 09:48:56 crc kubenswrapper[4822]: I0224 09:48:56.068243 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59888: no serving certificate available for the kubelet" Feb 24 09:48:59 crc kubenswrapper[4822]: I0224 09:48:59.073256 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59896: no serving certificate available for the kubelet" Feb 24 09:48:59 crc kubenswrapper[4822]: I0224 09:48:59.128409 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59904: no serving certificate available for the kubelet" Feb 24 09:49:02 crc kubenswrapper[4822]: I0224 09:49:02.135072 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41718: no serving certificate available for the kubelet" Feb 24 09:49:02 crc kubenswrapper[4822]: I0224 09:49:02.188499 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41734: no serving 
certificate available for the kubelet" Feb 24 09:49:05 crc kubenswrapper[4822]: I0224 09:49:05.191351 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41750: no serving certificate available for the kubelet" Feb 24 09:49:05 crc kubenswrapper[4822]: I0224 09:49:05.244144 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41752: no serving certificate available for the kubelet" Feb 24 09:49:08 crc kubenswrapper[4822]: I0224 09:49:08.228330 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41754: no serving certificate available for the kubelet" Feb 24 09:49:08 crc kubenswrapper[4822]: I0224 09:49:08.291881 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41770: no serving certificate available for the kubelet" Feb 24 09:49:08 crc kubenswrapper[4822]: I0224 09:49:08.346455 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:49:08 crc kubenswrapper[4822]: E0224 09:49:08.347013 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:49:11 crc kubenswrapper[4822]: I0224 09:49:11.287709 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50892: no serving certificate available for the kubelet" Feb 24 09:49:11 crc kubenswrapper[4822]: I0224 09:49:11.352677 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50908: no serving certificate available for the kubelet" Feb 24 09:49:14 crc kubenswrapper[4822]: I0224 09:49:14.350649 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50914: no serving certificate available for the kubelet" Feb 24 09:49:14 crc 
kubenswrapper[4822]: I0224 09:49:14.404591 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50916: no serving certificate available for the kubelet" Feb 24 09:49:17 crc kubenswrapper[4822]: I0224 09:49:17.411271 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50924: no serving certificate available for the kubelet" Feb 24 09:49:17 crc kubenswrapper[4822]: I0224 09:49:17.520093 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50936: no serving certificate available for the kubelet" Feb 24 09:49:20 crc kubenswrapper[4822]: I0224 09:49:20.467418 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50942: no serving certificate available for the kubelet" Feb 24 09:49:20 crc kubenswrapper[4822]: I0224 09:49:20.579352 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50948: no serving certificate available for the kubelet" Feb 24 09:49:23 crc kubenswrapper[4822]: I0224 09:49:23.337510 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:49:23 crc kubenswrapper[4822]: E0224 09:49:23.337900 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:49:23 crc kubenswrapper[4822]: I0224 09:49:23.522451 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39706: no serving certificate available for the kubelet" Feb 24 09:49:23 crc kubenswrapper[4822]: I0224 09:49:23.634990 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39722: no serving certificate available for the kubelet" Feb 24 09:49:26 crc kubenswrapper[4822]: I0224 09:49:26.574057 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:39736: no serving certificate available for the kubelet" Feb 24 09:49:26 crc kubenswrapper[4822]: I0224 09:49:26.680478 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39744: no serving certificate available for the kubelet" Feb 24 09:49:29 crc kubenswrapper[4822]: I0224 09:49:29.628627 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39746: no serving certificate available for the kubelet" Feb 24 09:49:29 crc kubenswrapper[4822]: I0224 09:49:29.748723 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39760: no serving certificate available for the kubelet" Feb 24 09:49:32 crc kubenswrapper[4822]: I0224 09:49:32.686338 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49682: no serving certificate available for the kubelet" Feb 24 09:49:32 crc kubenswrapper[4822]: I0224 09:49:32.802216 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49692: no serving certificate available for the kubelet" Feb 24 09:49:35 crc kubenswrapper[4822]: I0224 09:49:35.741612 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49698: no serving certificate available for the kubelet" Feb 24 09:49:35 crc kubenswrapper[4822]: I0224 09:49:35.861251 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49714: no serving certificate available for the kubelet" Feb 24 09:49:37 crc kubenswrapper[4822]: I0224 09:49:37.337516 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:49:37 crc kubenswrapper[4822]: E0224 09:49:37.338267 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" 
podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:49:38 crc kubenswrapper[4822]: I0224 09:49:38.799476 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49720: no serving certificate available for the kubelet" Feb 24 09:49:38 crc kubenswrapper[4822]: I0224 09:49:38.927719 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49726: no serving certificate available for the kubelet" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.568733 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569355 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569389 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569414 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569428 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569453 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569465 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569487 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="extract-content" Feb 24 09:49:39 crc 
kubenswrapper[4822]: I0224 09:49:39.569501 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="extract-content" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569529 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569541 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="extract-utilities" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569569 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="extract-content" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569584 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="extract-content" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569605 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="extract-content" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569621 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="extract-content" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569656 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.569674 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: E0224 09:49:39.569689 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="registry-server" Feb 24 09:49:39 crc 
kubenswrapper[4822]: I0224 09:49:39.569701 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.570055 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bd1955-1e32-4114-85bc-21c1fc43ac95" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.570089 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d8c8d6-7a83-49d6-a79a-f97343e4868c" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.570152 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ffa1e1-3d61-4458-a48b-5364bcce0b29" containerName="registry-server" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.572060 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.594190 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.707534 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.708004 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxrr\" (UniqueName: \"kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc 
kubenswrapper[4822]: I0224 09:49:39.708050 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.809808 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.809982 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxrr\" (UniqueName: \"kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.810043 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.810688 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.810688 4822 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.831791 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxrr\" (UniqueName: \"kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr\") pod \"redhat-operators-kzrnj\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:39 crc kubenswrapper[4822]: I0224 09:49:39.913580 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:40 crc kubenswrapper[4822]: I0224 09:49:40.406318 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:49:41 crc kubenswrapper[4822]: I0224 09:49:41.270409 4822 generic.go:334] "Generic (PLEG): container finished" podID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerID="8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc" exitCode=0 Feb 24 09:49:41 crc kubenswrapper[4822]: I0224 09:49:41.270592 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerDied","Data":"8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc"} Feb 24 09:49:41 crc kubenswrapper[4822]: I0224 09:49:41.270779 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerStarted","Data":"dcf999008f30e8f65a608d61a41ac10f5769551a2bbb806e3e5850f4890cf77b"} Feb 24 09:49:41 crc 
kubenswrapper[4822]: I0224 09:49:41.862127 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45324: no serving certificate available for the kubelet" Feb 24 09:49:41 crc kubenswrapper[4822]: I0224 09:49:41.991337 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45326: no serving certificate available for the kubelet" Feb 24 09:49:42 crc kubenswrapper[4822]: I0224 09:49:42.280989 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerStarted","Data":"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b"} Feb 24 09:49:43 crc kubenswrapper[4822]: I0224 09:49:43.293995 4822 generic.go:334] "Generic (PLEG): container finished" podID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerID="b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b" exitCode=0 Feb 24 09:49:43 crc kubenswrapper[4822]: I0224 09:49:43.294071 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerDied","Data":"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b"} Feb 24 09:49:44 crc kubenswrapper[4822]: I0224 09:49:44.307382 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerStarted","Data":"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59"} Feb 24 09:49:44 crc kubenswrapper[4822]: I0224 09:49:44.336472 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kzrnj" podStartSLOduration=2.893125876 podStartE2EDuration="5.33645539s" podCreationTimestamp="2026-02-24 09:49:39 +0000 UTC" firstStartedPulling="2026-02-24 09:49:41.27265275 +0000 UTC m=+2503.660415338" lastFinishedPulling="2026-02-24 09:49:43.715982264 +0000 UTC 
m=+2506.103744852" observedRunningTime="2026-02-24 09:49:44.329997774 +0000 UTC m=+2506.717760342" watchObservedRunningTime="2026-02-24 09:49:44.33645539 +0000 UTC m=+2506.724217938" Feb 24 09:49:44 crc kubenswrapper[4822]: I0224 09:49:44.991830 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45328: no serving certificate available for the kubelet" Feb 24 09:49:45 crc kubenswrapper[4822]: I0224 09:49:45.034460 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45332: no serving certificate available for the kubelet" Feb 24 09:49:48 crc kubenswrapper[4822]: I0224 09:49:48.040250 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45338: no serving certificate available for the kubelet" Feb 24 09:49:48 crc kubenswrapper[4822]: I0224 09:49:48.086517 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45352: no serving certificate available for the kubelet" Feb 24 09:49:49 crc kubenswrapper[4822]: I0224 09:49:49.914090 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:49 crc kubenswrapper[4822]: I0224 09:49:49.914140 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:49:50 crc kubenswrapper[4822]: I0224 09:49:50.337575 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:49:50 crc kubenswrapper[4822]: E0224 09:49:50.338382 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:49:50 crc kubenswrapper[4822]: I0224 
09:49:50.987007 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kzrnj" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="registry-server" probeResult="failure" output=< Feb 24 09:49:50 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:49:50 crc kubenswrapper[4822]: > Feb 24 09:49:51 crc kubenswrapper[4822]: I0224 09:49:51.100359 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54278: no serving certificate available for the kubelet" Feb 24 09:49:51 crc kubenswrapper[4822]: I0224 09:49:51.157125 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54294: no serving certificate available for the kubelet" Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.149734 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54296: no serving certificate available for the kubelet" Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.201606 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54304: no serving certificate available for the kubelet" Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.704194 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:49:54 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:49:54 crc kubenswrapper[4822]: > Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.704273 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.704807 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be 
restarted" Feb 24 09:49:54 crc kubenswrapper[4822]: I0224 09:49:54.780248 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6" gracePeriod=30 Feb 24 09:49:55 crc kubenswrapper[4822]: I0224 09:49:55.435833 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6" exitCode=143 Feb 24 09:49:55 crc kubenswrapper[4822]: I0224 09:49:55.435964 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6"} Feb 24 09:49:55 crc kubenswrapper[4822]: I0224 09:49:55.436115 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351"} Feb 24 09:49:55 crc kubenswrapper[4822]: I0224 09:49:55.436137 4822 scope.go:117] "RemoveContainer" containerID="35907b3d39bb9c85bb0ac876638478962e23a772b0887ea6e2d711c6c90b5ad8" Feb 24 09:49:57 crc kubenswrapper[4822]: I0224 09:49:57.201373 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54318: no serving certificate available for the kubelet" Feb 24 09:49:57 crc kubenswrapper[4822]: I0224 09:49:57.269529 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54320: no serving certificate available for the kubelet" Feb 24 09:50:00 crc kubenswrapper[4822]: I0224 09:50:00.001531 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:50:00 crc 
kubenswrapper[4822]: I0224 09:50:00.070532 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:50:00 crc kubenswrapper[4822]: I0224 09:50:00.249091 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:50:00 crc kubenswrapper[4822]: I0224 09:50:00.262158 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54326: no serving certificate available for the kubelet" Feb 24 09:50:00 crc kubenswrapper[4822]: I0224 09:50:00.317207 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54338: no serving certificate available for the kubelet" Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 09:50:01.292598 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:50:01 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:50:01 crc kubenswrapper[4822]: > Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 09:50:01.294457 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 09:50:01.296579 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 09:50:01.389523 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1" gracePeriod=30 Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 
09:50:01.496074 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kzrnj" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="registry-server" containerID="cri-o://49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59" gracePeriod=2 Feb 24 09:50:01 crc kubenswrapper[4822]: I0224 09:50:01.977333 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.108086 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities\") pod \"836e2fc9-8245-4af3-bea3-cd599aba611a\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.108236 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dxrr\" (UniqueName: \"kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr\") pod \"836e2fc9-8245-4af3-bea3-cd599aba611a\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.108444 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content\") pod \"836e2fc9-8245-4af3-bea3-cd599aba611a\" (UID: \"836e2fc9-8245-4af3-bea3-cd599aba611a\") " Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.109600 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities" (OuterVolumeSpecName: "utilities") pod "836e2fc9-8245-4af3-bea3-cd599aba611a" (UID: "836e2fc9-8245-4af3-bea3-cd599aba611a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.118193 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr" (OuterVolumeSpecName: "kube-api-access-2dxrr") pod "836e2fc9-8245-4af3-bea3-cd599aba611a" (UID: "836e2fc9-8245-4af3-bea3-cd599aba611a"). InnerVolumeSpecName "kube-api-access-2dxrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.210865 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.210944 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dxrr\" (UniqueName: \"kubernetes.io/projected/836e2fc9-8245-4af3-bea3-cd599aba611a-kube-api-access-2dxrr\") on node \"crc\" DevicePath \"\"" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.238502 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "836e2fc9-8245-4af3-bea3-cd599aba611a" (UID: "836e2fc9-8245-4af3-bea3-cd599aba611a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.312774 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836e2fc9-8245-4af3-bea3-cd599aba611a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.508297 4822 generic.go:334] "Generic (PLEG): container finished" podID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerID="49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59" exitCode=0 Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.508350 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzrnj" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.508373 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerDied","Data":"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59"} Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.508692 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzrnj" event={"ID":"836e2fc9-8245-4af3-bea3-cd599aba611a","Type":"ContainerDied","Data":"dcf999008f30e8f65a608d61a41ac10f5769551a2bbb806e3e5850f4890cf77b"} Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.508713 4822 scope.go:117] "RemoveContainer" containerID="49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.512860 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1" exitCode=143 Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.512906 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1"} Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.512964 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e"} Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.534618 4822 scope.go:117] "RemoveContainer" containerID="b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.535677 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.542211 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kzrnj"] Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.561380 4822 scope.go:117] "RemoveContainer" containerID="8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.593542 4822 scope.go:117] "RemoveContainer" containerID="49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59" Feb 24 09:50:02 crc kubenswrapper[4822]: E0224 09:50:02.594025 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59\": container with ID starting with 49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59 not found: ID does not exist" containerID="49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594098 4822 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59"} err="failed to get container status \"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59\": rpc error: code = NotFound desc = could not find container \"49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59\": container with ID starting with 49fc1debb58a253cbc8a4fa904d3198c340c3bc1d622c3372c8edc27c6b6af59 not found: ID does not exist" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594146 4822 scope.go:117] "RemoveContainer" containerID="b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b" Feb 24 09:50:02 crc kubenswrapper[4822]: E0224 09:50:02.594488 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b\": container with ID starting with b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b not found: ID does not exist" containerID="b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594529 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b"} err="failed to get container status \"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b\": rpc error: code = NotFound desc = could not find container \"b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b\": container with ID starting with b13deb2a7e8c97117af0f1654f65798566841327e97634d306ac34b965fbf78b not found: ID does not exist" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594549 4822 scope.go:117] "RemoveContainer" containerID="8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc" Feb 24 09:50:02 crc kubenswrapper[4822]: E0224 09:50:02.594822 4822 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc\": container with ID starting with 8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc not found: ID does not exist" containerID="8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594856 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc"} err="failed to get container status \"8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc\": rpc error: code = NotFound desc = could not find container \"8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc\": container with ID starting with 8f42f2720bdbbc58ae807c6d7064a043fe4117fd563dea2747c2c3e64cd69fcc not found: ID does not exist" Feb 24 09:50:02 crc kubenswrapper[4822]: I0224 09:50:02.594881 4822 scope.go:117] "RemoveContainer" containerID="6b69074a3b428cba81ebaa5ad1ff74cedfa8f1f70d4c7ccaa9e6620c049c5a01" Feb 24 09:50:03 crc kubenswrapper[4822]: I0224 09:50:03.057497 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:50:03 crc kubenswrapper[4822]: I0224 09:50:03.058878 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:50:03 crc kubenswrapper[4822]: I0224 09:50:03.316391 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47622: no serving certificate available for the kubelet" Feb 24 09:50:03 crc kubenswrapper[4822]: I0224 09:50:03.338341 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:50:03 crc kubenswrapper[4822]: E0224 09:50:03.338629 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:50:03 crc kubenswrapper[4822]: I0224 09:50:03.369421 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47632: no serving certificate available for the kubelet" Feb 24 09:50:04 crc kubenswrapper[4822]: I0224 09:50:04.349986 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" path="/var/lib/kubelet/pods/836e2fc9-8245-4af3-bea3-cd599aba611a/volumes" Feb 24 09:50:06 crc kubenswrapper[4822]: I0224 09:50:06.389671 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47638: no serving certificate available for the kubelet" Feb 24 09:50:06 crc kubenswrapper[4822]: I0224 09:50:06.445091 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47646: no serving certificate available for the kubelet" Feb 24 09:50:09 crc kubenswrapper[4822]: I0224 09:50:09.446348 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47654: no serving certificate available for the kubelet" Feb 24 09:50:09 crc kubenswrapper[4822]: I0224 09:50:09.510006 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47656: no serving certificate available for the kubelet" Feb 24 09:50:11 crc kubenswrapper[4822]: I0224 09:50:11.616556 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:50:11 crc kubenswrapper[4822]: I0224 09:50:11.616901 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:50:12 crc kubenswrapper[4822]: I0224 09:50:12.483682 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46320: no serving certificate available for the 
kubelet" Feb 24 09:50:12 crc kubenswrapper[4822]: I0224 09:50:12.564147 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46334: no serving certificate available for the kubelet" Feb 24 09:50:14 crc kubenswrapper[4822]: I0224 09:50:14.337419 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:50:14 crc kubenswrapper[4822]: E0224 09:50:14.337860 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:50:15 crc kubenswrapper[4822]: I0224 09:50:15.528731 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46344: no serving certificate available for the kubelet" Feb 24 09:50:15 crc kubenswrapper[4822]: I0224 09:50:15.612160 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46354: no serving certificate available for the kubelet" Feb 24 09:50:18 crc kubenswrapper[4822]: I0224 09:50:18.582967 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46364: no serving certificate available for the kubelet" Feb 24 09:50:18 crc kubenswrapper[4822]: I0224 09:50:18.666984 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46378: no serving certificate available for the kubelet" Feb 24 09:50:21 crc kubenswrapper[4822]: I0224 09:50:21.639628 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57348: no serving certificate available for the kubelet" Feb 24 09:50:21 crc kubenswrapper[4822]: I0224 09:50:21.714984 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57356: no serving certificate available for the kubelet" Feb 24 09:50:24 crc kubenswrapper[4822]: I0224 09:50:24.697232 
4822 ???:1] "http: TLS handshake error from 192.168.126.11:57364: no serving certificate available for the kubelet" Feb 24 09:50:24 crc kubenswrapper[4822]: I0224 09:50:24.766517 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57376: no serving certificate available for the kubelet" Feb 24 09:50:27 crc kubenswrapper[4822]: I0224 09:50:27.337734 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:50:27 crc kubenswrapper[4822]: E0224 09:50:27.338196 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:50:27 crc kubenswrapper[4822]: I0224 09:50:27.765027 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57388: no serving certificate available for the kubelet" Feb 24 09:50:27 crc kubenswrapper[4822]: I0224 09:50:27.829578 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57398: no serving certificate available for the kubelet" Feb 24 09:50:30 crc kubenswrapper[4822]: I0224 09:50:30.827866 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57414: no serving certificate available for the kubelet" Feb 24 09:50:30 crc kubenswrapper[4822]: I0224 09:50:30.881130 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57428: no serving certificate available for the kubelet" Feb 24 09:50:33 crc kubenswrapper[4822]: I0224 09:50:33.889324 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53114: no serving certificate available for the kubelet" Feb 24 09:50:33 crc kubenswrapper[4822]: I0224 09:50:33.938481 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53118: no 
serving certificate available for the kubelet" Feb 24 09:50:36 crc kubenswrapper[4822]: I0224 09:50:36.935540 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53124: no serving certificate available for the kubelet" Feb 24 09:50:36 crc kubenswrapper[4822]: I0224 09:50:36.984134 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53136: no serving certificate available for the kubelet" Feb 24 09:50:39 crc kubenswrapper[4822]: I0224 09:50:39.988052 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53138: no serving certificate available for the kubelet" Feb 24 09:50:40 crc kubenswrapper[4822]: I0224 09:50:40.089838 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53146: no serving certificate available for the kubelet" Feb 24 09:50:42 crc kubenswrapper[4822]: I0224 09:50:42.337520 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:50:42 crc kubenswrapper[4822]: E0224 09:50:42.338378 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:50:43 crc kubenswrapper[4822]: I0224 09:50:43.083663 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39372: no serving certificate available for the kubelet" Feb 24 09:50:43 crc kubenswrapper[4822]: I0224 09:50:43.155364 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39374: no serving certificate available for the kubelet" Feb 24 09:50:46 crc kubenswrapper[4822]: I0224 09:50:46.140155 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39386: no serving certificate available for the kubelet" Feb 24 09:50:46 crc 
kubenswrapper[4822]: I0224 09:50:46.217047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39398: no serving certificate available for the kubelet" Feb 24 09:50:49 crc kubenswrapper[4822]: I0224 09:50:49.200187 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39402: no serving certificate available for the kubelet" Feb 24 09:50:49 crc kubenswrapper[4822]: I0224 09:50:49.266507 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39408: no serving certificate available for the kubelet" Feb 24 09:50:52 crc kubenswrapper[4822]: I0224 09:50:52.253158 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36112: no serving certificate available for the kubelet" Feb 24 09:50:52 crc kubenswrapper[4822]: I0224 09:50:52.319037 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36122: no serving certificate available for the kubelet" Feb 24 09:50:54 crc kubenswrapper[4822]: I0224 09:50:54.337764 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:50:54 crc kubenswrapper[4822]: E0224 09:50:54.338392 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:50:55 crc kubenswrapper[4822]: I0224 09:50:55.306960 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36130: no serving certificate available for the kubelet" Feb 24 09:50:55 crc kubenswrapper[4822]: I0224 09:50:55.377950 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36132: no serving certificate available for the kubelet" Feb 24 09:50:58 crc kubenswrapper[4822]: I0224 09:50:58.362805 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:36144: no serving certificate available for the kubelet" Feb 24 09:50:58 crc kubenswrapper[4822]: I0224 09:50:58.422104 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36148: no serving certificate available for the kubelet" Feb 24 09:51:01 crc kubenswrapper[4822]: I0224 09:51:01.428397 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39662: no serving certificate available for the kubelet" Feb 24 09:51:01 crc kubenswrapper[4822]: I0224 09:51:01.484051 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39674: no serving certificate available for the kubelet" Feb 24 09:51:04 crc kubenswrapper[4822]: I0224 09:51:04.485015 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39676: no serving certificate available for the kubelet" Feb 24 09:51:04 crc kubenswrapper[4822]: I0224 09:51:04.542041 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39690: no serving certificate available for the kubelet" Feb 24 09:51:06 crc kubenswrapper[4822]: I0224 09:51:06.337519 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:51:06 crc kubenswrapper[4822]: E0224 09:51:06.337989 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:51:07 crc kubenswrapper[4822]: I0224 09:51:07.547604 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39702: no serving certificate available for the kubelet" Feb 24 09:51:07 crc kubenswrapper[4822]: I0224 09:51:07.602961 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39706: no serving certificate 
available for the kubelet" Feb 24 09:51:10 crc kubenswrapper[4822]: I0224 09:51:10.608046 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39722: no serving certificate available for the kubelet" Feb 24 09:51:10 crc kubenswrapper[4822]: I0224 09:51:10.697931 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39728: no serving certificate available for the kubelet" Feb 24 09:51:13 crc kubenswrapper[4822]: I0224 09:51:13.667948 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46284: no serving certificate available for the kubelet" Feb 24 09:51:13 crc kubenswrapper[4822]: I0224 09:51:13.758265 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46300: no serving certificate available for the kubelet" Feb 24 09:51:16 crc kubenswrapper[4822]: I0224 09:51:16.720376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46304: no serving certificate available for the kubelet" Feb 24 09:51:16 crc kubenswrapper[4822]: I0224 09:51:16.807958 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46306: no serving certificate available for the kubelet" Feb 24 09:51:19 crc kubenswrapper[4822]: I0224 09:51:19.339963 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:51:19 crc kubenswrapper[4822]: E0224 09:51:19.340366 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:51:19 crc kubenswrapper[4822]: I0224 09:51:19.785453 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46314: no serving certificate available for the kubelet" Feb 24 09:51:19 crc kubenswrapper[4822]: I0224 
09:51:19.871792 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46328: no serving certificate available for the kubelet" Feb 24 09:51:22 crc kubenswrapper[4822]: I0224 09:51:22.847505 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53132: no serving certificate available for the kubelet" Feb 24 09:51:22 crc kubenswrapper[4822]: I0224 09:51:22.929761 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53140: no serving certificate available for the kubelet" Feb 24 09:51:25 crc kubenswrapper[4822]: I0224 09:51:25.892468 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53142: no serving certificate available for the kubelet" Feb 24 09:51:26 crc kubenswrapper[4822]: I0224 09:51:26.014177 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53144: no serving certificate available for the kubelet" Feb 24 09:51:28 crc kubenswrapper[4822]: I0224 09:51:28.960871 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53156: no serving certificate available for the kubelet" Feb 24 09:51:29 crc kubenswrapper[4822]: I0224 09:51:29.061824 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53162: no serving certificate available for the kubelet" Feb 24 09:51:31 crc kubenswrapper[4822]: I0224 09:51:31.338176 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:51:31 crc kubenswrapper[4822]: E0224 09:51:31.339077 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:51:32 crc kubenswrapper[4822]: I0224 09:51:32.019235 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:56546: no serving certificate available for the kubelet" Feb 24 09:51:32 crc kubenswrapper[4822]: I0224 09:51:32.106097 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56560: no serving certificate available for the kubelet" Feb 24 09:51:35 crc kubenswrapper[4822]: I0224 09:51:35.075510 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56570: no serving certificate available for the kubelet" Feb 24 09:51:35 crc kubenswrapper[4822]: I0224 09:51:35.152415 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56578: no serving certificate available for the kubelet" Feb 24 09:51:38 crc kubenswrapper[4822]: I0224 09:51:38.132614 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56580: no serving certificate available for the kubelet" Feb 24 09:51:38 crc kubenswrapper[4822]: I0224 09:51:38.209681 4822 ???:1] "http: TLS handshake error from 192.168.126.11:56596: no serving certificate available for the kubelet" Feb 24 09:51:41 crc kubenswrapper[4822]: I0224 09:51:41.190151 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46536: no serving certificate available for the kubelet" Feb 24 09:51:41 crc kubenswrapper[4822]: I0224 09:51:41.263661 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46544: no serving certificate available for the kubelet" Feb 24 09:51:44 crc kubenswrapper[4822]: I0224 09:51:44.249115 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46560: no serving certificate available for the kubelet" Feb 24 09:51:44 crc kubenswrapper[4822]: I0224 09:51:44.313536 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46570: no serving certificate available for the kubelet" Feb 24 09:51:44 crc kubenswrapper[4822]: I0224 09:51:44.338612 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:51:44 crc kubenswrapper[4822]: E0224 09:51:44.339706 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:51:44 crc kubenswrapper[4822]: I0224 09:51:44.823195 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46584: no serving certificate available for the kubelet" Feb 24 09:51:47 crc kubenswrapper[4822]: I0224 09:51:47.304209 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46588: no serving certificate available for the kubelet" Feb 24 09:51:47 crc kubenswrapper[4822]: I0224 09:51:47.372469 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46594: no serving certificate available for the kubelet" Feb 24 09:51:50 crc kubenswrapper[4822]: I0224 09:51:50.347500 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46604: no serving certificate available for the kubelet" Feb 24 09:51:50 crc kubenswrapper[4822]: I0224 09:51:50.422011 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46606: no serving certificate available for the kubelet" Feb 24 09:51:53 crc kubenswrapper[4822]: I0224 09:51:53.414202 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46678: no serving certificate available for the kubelet" Feb 24 09:51:53 crc kubenswrapper[4822]: I0224 09:51:53.487853 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46692: no serving certificate available for the kubelet" Feb 24 09:51:56 crc kubenswrapper[4822]: I0224 09:51:56.497666 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46696: no serving certificate available for the kubelet" Feb 24 09:51:56 crc kubenswrapper[4822]: I0224 09:51:56.555796 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46708: no serving certificate available for the kubelet" Feb 24 09:51:58 crc 
kubenswrapper[4822]: I0224 09:51:58.347127 4822 scope.go:117] "RemoveContainer" containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:51:58 crc kubenswrapper[4822]: I0224 09:51:58.721057 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b"} Feb 24 09:51:59 crc kubenswrapper[4822]: I0224 09:51:59.560151 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46710: no serving certificate available for the kubelet" Feb 24 09:51:59 crc kubenswrapper[4822]: I0224 09:51:59.618228 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46712: no serving certificate available for the kubelet" Feb 24 09:52:02 crc kubenswrapper[4822]: I0224 09:52:02.618518 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51176: no serving certificate available for the kubelet" Feb 24 09:52:02 crc kubenswrapper[4822]: I0224 09:52:02.675544 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51178: no serving certificate available for the kubelet" Feb 24 09:52:05 crc kubenswrapper[4822]: I0224 09:52:05.672056 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51182: no serving certificate available for the kubelet" Feb 24 09:52:05 crc kubenswrapper[4822]: I0224 09:52:05.731107 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51194: no serving certificate available for the kubelet" Feb 24 09:52:08 crc kubenswrapper[4822]: I0224 09:52:08.732359 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51210: no serving certificate available for the kubelet" Feb 24 09:52:08 crc kubenswrapper[4822]: I0224 09:52:08.824652 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51222: no serving certificate available for the kubelet" Feb 24 09:52:11 crc kubenswrapper[4822]: I0224 09:52:11.798667 4822 
???:1] "http: TLS handshake error from 192.168.126.11:58756: no serving certificate available for the kubelet" Feb 24 09:52:11 crc kubenswrapper[4822]: I0224 09:52:11.892136 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58766: no serving certificate available for the kubelet" Feb 24 09:52:14 crc kubenswrapper[4822]: I0224 09:52:14.858712 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58774: no serving certificate available for the kubelet" Feb 24 09:52:14 crc kubenswrapper[4822]: I0224 09:52:14.944787 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58778: no serving certificate available for the kubelet" Feb 24 09:52:17 crc kubenswrapper[4822]: I0224 09:52:17.923009 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58786: no serving certificate available for the kubelet" Feb 24 09:52:17 crc kubenswrapper[4822]: I0224 09:52:17.977811 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58798: no serving certificate available for the kubelet" Feb 24 09:52:20 crc kubenswrapper[4822]: I0224 09:52:20.977463 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58802: no serving certificate available for the kubelet" Feb 24 09:52:21 crc kubenswrapper[4822]: I0224 09:52:21.032579 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58806: no serving certificate available for the kubelet" Feb 24 09:52:24 crc kubenswrapper[4822]: I0224 09:52:24.032272 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41558: no serving certificate available for the kubelet" Feb 24 09:52:24 crc kubenswrapper[4822]: I0224 09:52:24.126789 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41574: no serving certificate available for the kubelet" Feb 24 09:52:27 crc kubenswrapper[4822]: I0224 09:52:27.086608 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41582: no serving certificate available for the kubelet" Feb 24 09:52:27 crc kubenswrapper[4822]: I0224 09:52:27.183581 4822 ???:1] "http: TLS handshake 
error from 192.168.126.11:41588: no serving certificate available for the kubelet" Feb 24 09:52:30 crc kubenswrapper[4822]: I0224 09:52:30.142464 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41604: no serving certificate available for the kubelet" Feb 24 09:52:30 crc kubenswrapper[4822]: I0224 09:52:30.243737 4822 ???:1] "http: TLS handshake error from 192.168.126.11:41608: no serving certificate available for the kubelet" Feb 24 09:52:33 crc kubenswrapper[4822]: I0224 09:52:33.197814 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35232: no serving certificate available for the kubelet" Feb 24 09:52:33 crc kubenswrapper[4822]: I0224 09:52:33.303696 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35246: no serving certificate available for the kubelet" Feb 24 09:52:36 crc kubenswrapper[4822]: I0224 09:52:36.252103 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35258: no serving certificate available for the kubelet" Feb 24 09:52:36 crc kubenswrapper[4822]: I0224 09:52:36.350443 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35272: no serving certificate available for the kubelet" Feb 24 09:52:39 crc kubenswrapper[4822]: I0224 09:52:39.306342 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35274: no serving certificate available for the kubelet" Feb 24 09:52:39 crc kubenswrapper[4822]: I0224 09:52:39.394761 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35284: no serving certificate available for the kubelet" Feb 24 09:52:42 crc kubenswrapper[4822]: I0224 09:52:42.352678 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45398: no serving certificate available for the kubelet" Feb 24 09:52:42 crc kubenswrapper[4822]: I0224 09:52:42.450986 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45400: no serving certificate available for the kubelet" Feb 24 09:52:45 crc kubenswrapper[4822]: I0224 09:52:45.409449 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:45404: no serving certificate available for the kubelet" Feb 24 09:52:45 crc kubenswrapper[4822]: I0224 09:52:45.503342 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45406: no serving certificate available for the kubelet" Feb 24 09:52:48 crc kubenswrapper[4822]: I0224 09:52:48.465497 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45414: no serving certificate available for the kubelet" Feb 24 09:52:48 crc kubenswrapper[4822]: I0224 09:52:48.557785 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45422: no serving certificate available for the kubelet" Feb 24 09:52:51 crc kubenswrapper[4822]: I0224 09:52:51.600351 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55466: no serving certificate available for the kubelet" Feb 24 09:52:51 crc kubenswrapper[4822]: I0224 09:52:51.658024 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55476: no serving certificate available for the kubelet" Feb 24 09:52:54 crc kubenswrapper[4822]: I0224 09:52:54.661248 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55488: no serving certificate available for the kubelet" Feb 24 09:52:54 crc kubenswrapper[4822]: I0224 09:52:54.719582 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55498: no serving certificate available for the kubelet" Feb 24 09:52:57 crc kubenswrapper[4822]: I0224 09:52:57.718508 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55508: no serving certificate available for the kubelet" Feb 24 09:52:57 crc kubenswrapper[4822]: I0224 09:52:57.777442 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55516: no serving certificate available for the kubelet" Feb 24 09:53:00 crc kubenswrapper[4822]: I0224 09:53:00.777539 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55524: no serving certificate available for the kubelet" Feb 24 09:53:00 crc kubenswrapper[4822]: I0224 09:53:00.840716 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55530: no 
serving certificate available for the kubelet" Feb 24 09:53:03 crc kubenswrapper[4822]: I0224 09:53:03.826735 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45438: no serving certificate available for the kubelet" Feb 24 09:53:03 crc kubenswrapper[4822]: I0224 09:53:03.919997 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45444: no serving certificate available for the kubelet" Feb 24 09:53:06 crc kubenswrapper[4822]: I0224 09:53:06.894256 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45456: no serving certificate available for the kubelet" Feb 24 09:53:06 crc kubenswrapper[4822]: I0224 09:53:06.981514 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45464: no serving certificate available for the kubelet" Feb 24 09:53:09 crc kubenswrapper[4822]: I0224 09:53:09.945858 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45480: no serving certificate available for the kubelet" Feb 24 09:53:10 crc kubenswrapper[4822]: I0224 09:53:10.040366 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45488: no serving certificate available for the kubelet" Feb 24 09:53:13 crc kubenswrapper[4822]: I0224 09:53:13.002118 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46580: no serving certificate available for the kubelet" Feb 24 09:53:13 crc kubenswrapper[4822]: I0224 09:53:13.095463 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46586: no serving certificate available for the kubelet" Feb 24 09:53:16 crc kubenswrapper[4822]: I0224 09:53:16.068149 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46588: no serving certificate available for the kubelet" Feb 24 09:53:16 crc kubenswrapper[4822]: I0224 09:53:16.143977 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46602: no serving certificate available for the kubelet" Feb 24 09:53:19 crc kubenswrapper[4822]: I0224 09:53:19.124452 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46612: no serving certificate available 
for the kubelet" Feb 24 09:53:19 crc kubenswrapper[4822]: I0224 09:53:19.200140 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46622: no serving certificate available for the kubelet" Feb 24 09:53:22 crc kubenswrapper[4822]: I0224 09:53:22.183551 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40110: no serving certificate available for the kubelet" Feb 24 09:53:22 crc kubenswrapper[4822]: I0224 09:53:22.253845 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40122: no serving certificate available for the kubelet" Feb 24 09:53:25 crc kubenswrapper[4822]: I0224 09:53:25.247452 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40130: no serving certificate available for the kubelet" Feb 24 09:53:25 crc kubenswrapper[4822]: I0224 09:53:25.308367 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40142: no serving certificate available for the kubelet" Feb 24 09:53:28 crc kubenswrapper[4822]: I0224 09:53:28.335659 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40150: no serving certificate available for the kubelet" Feb 24 09:53:28 crc kubenswrapper[4822]: I0224 09:53:28.386878 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40162: no serving certificate available for the kubelet" Feb 24 09:53:31 crc kubenswrapper[4822]: I0224 09:53:31.400110 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45600: no serving certificate available for the kubelet" Feb 24 09:53:31 crc kubenswrapper[4822]: I0224 09:53:31.452483 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45608: no serving certificate available for the kubelet" Feb 24 09:53:34 crc kubenswrapper[4822]: I0224 09:53:34.451075 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45620: no serving certificate available for the kubelet" Feb 24 09:53:34 crc kubenswrapper[4822]: I0224 09:53:34.496332 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45634: no serving certificate available for the kubelet" Feb 24 
09:53:37 crc kubenswrapper[4822]: I0224 09:53:37.512977 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45646: no serving certificate available for the kubelet" Feb 24 09:53:37 crc kubenswrapper[4822]: I0224 09:53:37.564164 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45656: no serving certificate available for the kubelet" Feb 24 09:53:40 crc kubenswrapper[4822]: I0224 09:53:40.579287 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45662: no serving certificate available for the kubelet" Feb 24 09:53:40 crc kubenswrapper[4822]: I0224 09:53:40.633440 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45670: no serving certificate available for the kubelet" Feb 24 09:53:43 crc kubenswrapper[4822]: I0224 09:53:43.647469 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33172: no serving certificate available for the kubelet" Feb 24 09:53:43 crc kubenswrapper[4822]: I0224 09:53:43.733300 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33186: no serving certificate available for the kubelet" Feb 24 09:53:46 crc kubenswrapper[4822]: I0224 09:53:46.702869 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33196: no serving certificate available for the kubelet" Feb 24 09:53:46 crc kubenswrapper[4822]: I0224 09:53:46.782615 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33198: no serving certificate available for the kubelet" Feb 24 09:53:49 crc kubenswrapper[4822]: I0224 09:53:49.758310 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33204: no serving certificate available for the kubelet" Feb 24 09:53:49 crc kubenswrapper[4822]: I0224 09:53:49.848178 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33218: no serving certificate available for the kubelet" Feb 24 09:53:52 crc kubenswrapper[4822]: I0224 09:53:52.834365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33538: no serving certificate available for the kubelet" Feb 24 09:53:52 crc 
kubenswrapper[4822]: I0224 09:53:52.904494 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33548: no serving certificate available for the kubelet" Feb 24 09:53:55 crc kubenswrapper[4822]: I0224 09:53:55.897218 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33554: no serving certificate available for the kubelet" Feb 24 09:53:55 crc kubenswrapper[4822]: I0224 09:53:55.958498 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33568: no serving certificate available for the kubelet" Feb 24 09:53:58 crc kubenswrapper[4822]: I0224 09:53:58.953173 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33578: no serving certificate available for the kubelet" Feb 24 09:53:59 crc kubenswrapper[4822]: I0224 09:53:59.006112 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33580: no serving certificate available for the kubelet" Feb 24 09:54:01 crc kubenswrapper[4822]: I0224 09:54:01.990234 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49210: no serving certificate available for the kubelet" Feb 24 09:54:02 crc kubenswrapper[4822]: I0224 09:54:02.059925 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49214: no serving certificate available for the kubelet" Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.721458 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=< Feb 24 09:54:04 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:54:04 crc kubenswrapper[4822]: > Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.721867 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.722660 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.790529 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351" gracePeriod=30 Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.962946 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351" exitCode=143 Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.962962 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351"} Feb 24 09:54:04 crc kubenswrapper[4822]: I0224 09:54:04.963039 4822 scope.go:117] "RemoveContainer" containerID="74f3cbe618e9630145cc19ece8102172932f857f6fa57e95aa6b9ffa71795cd6" Feb 24 09:54:05 crc kubenswrapper[4822]: I0224 09:54:05.042989 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49228: no serving certificate available for the kubelet" Feb 24 09:54:05 crc kubenswrapper[4822]: E0224 09:54:05.050825 4822 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ff049ae_9abb_4477_9f51_eee7228cedfd.slice/crio-conmon-e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351.scope\": RecentStats: unable to find data in memory cache]" Feb 24 09:54:05 crc kubenswrapper[4822]: I0224 09:54:05.105764 4822 ???:1] 
"http: TLS handshake error from 192.168.126.11:49240: no serving certificate available for the kubelet" Feb 24 09:54:05 crc kubenswrapper[4822]: I0224 09:54:05.973720 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerStarted","Data":"7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"} Feb 24 09:54:08 crc kubenswrapper[4822]: I0224 09:54:08.081829 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49254: no serving certificate available for the kubelet" Feb 24 09:54:08 crc kubenswrapper[4822]: I0224 09:54:08.145396 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49260: no serving certificate available for the kubelet" Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.141186 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45698: no serving certificate available for the kubelet" Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.164118 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=< Feb 24 09:54:11 crc kubenswrapper[4822]: waiting for gcomm URI Feb 24 09:54:11 crc kubenswrapper[4822]: > Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.164240 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.165307 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.205470 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45704: no serving 
certificate available for the kubelet" Feb 24 09:54:11 crc kubenswrapper[4822]: I0224 09:54:11.258831 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e" gracePeriod=30 Feb 24 09:54:12 crc kubenswrapper[4822]: I0224 09:54:12.036650 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e" exitCode=143 Feb 24 09:54:12 crc kubenswrapper[4822]: I0224 09:54:12.036794 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e"} Feb 24 09:54:12 crc kubenswrapper[4822]: I0224 09:54:12.038041 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerStarted","Data":"9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"} Feb 24 09:54:12 crc kubenswrapper[4822]: I0224 09:54:12.038066 4822 scope.go:117] "RemoveContainer" containerID="a62ac07f03c3cf24264a4baf203c891cea64e1dc745db8ec0d97bb7e8a039dd1" Feb 24 09:54:13 crc kubenswrapper[4822]: I0224 09:54:13.057187 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 09:54:13 crc kubenswrapper[4822]: I0224 09:54:13.058228 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 09:54:14 crc kubenswrapper[4822]: I0224 09:54:14.222558 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45710: no serving certificate available for the kubelet" Feb 24 09:54:14 crc kubenswrapper[4822]: 
I0224 09:54:14.281060 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45716: no serving certificate available for the kubelet" Feb 24 09:54:15 crc kubenswrapper[4822]: I0224 09:54:15.676897 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:54:15 crc kubenswrapper[4822]: I0224 09:54:15.677407 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:54:17 crc kubenswrapper[4822]: I0224 09:54:17.301009 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45722: no serving certificate available for the kubelet" Feb 24 09:54:17 crc kubenswrapper[4822]: I0224 09:54:17.357567 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45724: no serving certificate available for the kubelet" Feb 24 09:54:20 crc kubenswrapper[4822]: I0224 09:54:20.359215 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45730: no serving certificate available for the kubelet" Feb 24 09:54:20 crc kubenswrapper[4822]: I0224 09:54:20.420856 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45740: no serving certificate available for the kubelet" Feb 24 09:54:21 crc kubenswrapper[4822]: I0224 09:54:21.616212 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 09:54:21 crc kubenswrapper[4822]: I0224 09:54:21.616556 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 09:54:23 crc kubenswrapper[4822]: I0224 
09:54:23.409993 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42050: no serving certificate available for the kubelet" Feb 24 09:54:23 crc kubenswrapper[4822]: I0224 09:54:23.480956 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42064: no serving certificate available for the kubelet" Feb 24 09:54:26 crc kubenswrapper[4822]: I0224 09:54:26.455329 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42072: no serving certificate available for the kubelet" Feb 24 09:54:26 crc kubenswrapper[4822]: I0224 09:54:26.520888 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42074: no serving certificate available for the kubelet" Feb 24 09:54:29 crc kubenswrapper[4822]: I0224 09:54:29.499225 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42084: no serving certificate available for the kubelet" Feb 24 09:54:29 crc kubenswrapper[4822]: I0224 09:54:29.577183 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42100: no serving certificate available for the kubelet" Feb 24 09:54:32 crc kubenswrapper[4822]: I0224 09:54:32.541165 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52110: no serving certificate available for the kubelet" Feb 24 09:54:32 crc kubenswrapper[4822]: I0224 09:54:32.615785 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52126: no serving certificate available for the kubelet" Feb 24 09:54:35 crc kubenswrapper[4822]: I0224 09:54:35.599375 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52140: no serving certificate available for the kubelet" Feb 24 09:54:35 crc kubenswrapper[4822]: I0224 09:54:35.671599 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52144: no serving certificate available for the kubelet" Feb 24 09:54:38 crc kubenswrapper[4822]: I0224 09:54:38.660032 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52148: no serving certificate available for the kubelet" Feb 24 09:54:38 crc kubenswrapper[4822]: I0224 09:54:38.732668 4822 ???:1] 
"http: TLS handshake error from 192.168.126.11:52152: no serving certificate available for the kubelet" Feb 24 09:54:41 crc kubenswrapper[4822]: I0224 09:54:41.712226 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33456: no serving certificate available for the kubelet" Feb 24 09:54:41 crc kubenswrapper[4822]: I0224 09:54:41.791467 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33470: no serving certificate available for the kubelet" Feb 24 09:54:44 crc kubenswrapper[4822]: I0224 09:54:44.767992 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33478: no serving certificate available for the kubelet" Feb 24 09:54:44 crc kubenswrapper[4822]: I0224 09:54:44.921445 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33482: no serving certificate available for the kubelet" Feb 24 09:54:45 crc kubenswrapper[4822]: I0224 09:54:45.677281 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:54:45 crc kubenswrapper[4822]: I0224 09:54:45.677367 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:54:47 crc kubenswrapper[4822]: I0224 09:54:47.826112 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33494: no serving certificate available for the kubelet" Feb 24 09:54:47 crc kubenswrapper[4822]: I0224 09:54:47.985508 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33498: no serving certificate available for the kubelet" Feb 24 09:54:50 crc kubenswrapper[4822]: I0224 09:54:50.887236 4822 ???:1] 
"http: TLS handshake error from 192.168.126.11:33510: no serving certificate available for the kubelet" Feb 24 09:54:51 crc kubenswrapper[4822]: I0224 09:54:51.044079 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33518: no serving certificate available for the kubelet" Feb 24 09:54:53 crc kubenswrapper[4822]: I0224 09:54:53.946325 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51554: no serving certificate available for the kubelet" Feb 24 09:54:54 crc kubenswrapper[4822]: I0224 09:54:54.090796 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51568: no serving certificate available for the kubelet" Feb 24 09:54:57 crc kubenswrapper[4822]: I0224 09:54:57.035734 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51572: no serving certificate available for the kubelet" Feb 24 09:54:57 crc kubenswrapper[4822]: I0224 09:54:57.141216 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51580: no serving certificate available for the kubelet" Feb 24 09:55:00 crc kubenswrapper[4822]: I0224 09:55:00.104111 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51588: no serving certificate available for the kubelet" Feb 24 09:55:00 crc kubenswrapper[4822]: I0224 09:55:00.186229 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51600: no serving certificate available for the kubelet" Feb 24 09:55:03 crc kubenswrapper[4822]: I0224 09:55:03.160376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36346: no serving certificate available for the kubelet" Feb 24 09:55:03 crc kubenswrapper[4822]: I0224 09:55:03.242639 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36350: no serving certificate available for the kubelet" Feb 24 09:55:06 crc kubenswrapper[4822]: I0224 09:55:06.212738 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36364: no serving certificate available for the kubelet" Feb 24 09:55:06 crc kubenswrapper[4822]: I0224 09:55:06.278375 4822 ???:1] "http: TLS handshake error 
from 192.168.126.11:36370: no serving certificate available for the kubelet" Feb 24 09:55:09 crc kubenswrapper[4822]: I0224 09:55:09.279000 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36376: no serving certificate available for the kubelet" Feb 24 09:55:09 crc kubenswrapper[4822]: I0224 09:55:09.330083 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36386: no serving certificate available for the kubelet" Feb 24 09:55:12 crc kubenswrapper[4822]: I0224 09:55:12.340854 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58298: no serving certificate available for the kubelet" Feb 24 09:55:12 crc kubenswrapper[4822]: I0224 09:55:12.417122 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58304: no serving certificate available for the kubelet" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.390780 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58320: no serving certificate available for the kubelet" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.480766 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58330: no serving certificate available for the kubelet" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.676704 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.676792 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.676868 4822 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.677944 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:55:15 crc kubenswrapper[4822]: I0224 09:55:15.678081 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b" gracePeriod=600 Feb 24 09:55:16 crc kubenswrapper[4822]: I0224 09:55:16.678000 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b" exitCode=0 Feb 24 09:55:16 crc kubenswrapper[4822]: I0224 09:55:16.678094 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b"} Feb 24 09:55:16 crc kubenswrapper[4822]: I0224 09:55:16.678383 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerStarted","Data":"0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"} Feb 24 09:55:16 crc kubenswrapper[4822]: I0224 09:55:16.678414 4822 scope.go:117] "RemoveContainer" 
containerID="73d09a66304a086c367c8577265ad701c4cbf69329dfde07a6ab8a962f8b647d" Feb 24 09:55:18 crc kubenswrapper[4822]: I0224 09:55:18.448218 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58336: no serving certificate available for the kubelet" Feb 24 09:55:18 crc kubenswrapper[4822]: I0224 09:55:18.532110 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58348: no serving certificate available for the kubelet" Feb 24 09:55:21 crc kubenswrapper[4822]: I0224 09:55:21.495610 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53510: no serving certificate available for the kubelet" Feb 24 09:55:21 crc kubenswrapper[4822]: I0224 09:55:21.577485 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53522: no serving certificate available for the kubelet" Feb 24 09:55:24 crc kubenswrapper[4822]: I0224 09:55:24.578664 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53538: no serving certificate available for the kubelet" Feb 24 09:55:24 crc kubenswrapper[4822]: I0224 09:55:24.653051 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53542: no serving certificate available for the kubelet" Feb 24 09:55:27 crc kubenswrapper[4822]: I0224 09:55:27.627610 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53556: no serving certificate available for the kubelet" Feb 24 09:55:27 crc kubenswrapper[4822]: I0224 09:55:27.709731 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53572: no serving certificate available for the kubelet" Feb 24 09:55:30 crc kubenswrapper[4822]: I0224 09:55:30.672701 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53586: no serving certificate available for the kubelet" Feb 24 09:55:30 crc kubenswrapper[4822]: I0224 09:55:30.747231 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53588: no serving certificate available for the kubelet" Feb 24 09:55:33 crc kubenswrapper[4822]: I0224 09:55:33.732374 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57508: 
no serving certificate available for the kubelet" Feb 24 09:55:33 crc kubenswrapper[4822]: I0224 09:55:33.783762 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57518: no serving certificate available for the kubelet" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.941234 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6jz7r/must-gather-bhc9h"] Feb 24 09:55:34 crc kubenswrapper[4822]: E0224 09:55:34.942082 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="extract-content" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.942103 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="extract-content" Feb 24 09:55:34 crc kubenswrapper[4822]: E0224 09:55:34.942122 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="extract-utilities" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.942131 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="extract-utilities" Feb 24 09:55:34 crc kubenswrapper[4822]: E0224 09:55:34.942146 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="registry-server" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.942156 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="registry-server" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.942354 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="836e2fc9-8245-4af3-bea3-cd599aba611a" containerName="registry-server" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.943546 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.947391 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6jz7r"/"openshift-service-ca.crt" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.947495 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6jz7r"/"default-dockercfg-2v4zv" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.947859 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6jz7r"/"kube-root-ca.crt" Feb 24 09:55:34 crc kubenswrapper[4822]: I0224 09:55:34.960072 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6jz7r/must-gather-bhc9h"] Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.035730 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.035935 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcwv\" (UniqueName: \"kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.136866 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqcwv\" (UniqueName: \"kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " 
pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.136988 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.137560 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.154054 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcwv\" (UniqueName: \"kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv\") pod \"must-gather-bhc9h\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") " pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.264308 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.755819 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6jz7r/must-gather-bhc9h"] Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.765414 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:55:35 crc kubenswrapper[4822]: I0224 09:55:35.850348 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" event={"ID":"244fb64f-3d89-480f-b297-abc7a1b5a448","Type":"ContainerStarted","Data":"7f913df13eca731b1bd8ce4a2f4ddd30305dd5301a5716f8fa0d8bab664076c3"} Feb 24 09:55:36 crc kubenswrapper[4822]: I0224 09:55:36.771065 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57522: no serving certificate available for the kubelet" Feb 24 09:55:36 crc kubenswrapper[4822]: I0224 09:55:36.815281 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57526: no serving certificate available for the kubelet" Feb 24 09:55:39 crc kubenswrapper[4822]: I0224 09:55:39.813174 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57538: no serving certificate available for the kubelet" Feb 24 09:55:39 crc kubenswrapper[4822]: I0224 09:55:39.848944 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57552: no serving certificate available for the kubelet" Feb 24 09:55:42 crc kubenswrapper[4822]: I0224 09:55:42.861773 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46788: no serving certificate available for the kubelet" Feb 24 09:55:42 crc kubenswrapper[4822]: I0224 09:55:42.901347 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46802: no serving certificate available for the kubelet" Feb 24 09:55:42 crc kubenswrapper[4822]: I0224 09:55:42.919213 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" 
event={"ID":"244fb64f-3d89-480f-b297-abc7a1b5a448","Type":"ContainerStarted","Data":"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"} Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.520583 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6jz7r/crc-debug-shgrh"] Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.521474 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.579608 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcggs\" (UniqueName: \"kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs\") pod \"crc-debug-shgrh\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.579682 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host\") pod \"crc-debug-shgrh\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.680806 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host\") pod \"crc-debug-shgrh\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.680929 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcggs\" (UniqueName: \"kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs\") pod \"crc-debug-shgrh\" (UID: 
\"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.680978 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host\") pod \"crc-debug-shgrh\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.709147 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcggs\" (UniqueName: \"kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs\") pod \"crc-debug-shgrh\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.836872 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:43 crc kubenswrapper[4822]: W0224 09:55:43.912269 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode168f6db_20a2_4746_9d95_655e12c3928a.slice/crio-850e96b8f4eb446c626b63113b1ed278666c0a7212c7c6097d05a51056193ced WatchSource:0}: Error finding container 850e96b8f4eb446c626b63113b1ed278666c0a7212c7c6097d05a51056193ced: Status 404 returned error can't find the container with id 850e96b8f4eb446c626b63113b1ed278666c0a7212c7c6097d05a51056193ced Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.976495 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" event={"ID":"244fb64f-3d89-480f-b297-abc7a1b5a448","Type":"ContainerStarted","Data":"17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe"} Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.982858 4822 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" event={"ID":"e168f6db-20a2-4746-9d95-655e12c3928a","Type":"ContainerStarted","Data":"850e96b8f4eb446c626b63113b1ed278666c0a7212c7c6097d05a51056193ced"} Feb 24 09:55:43 crc kubenswrapper[4822]: I0224 09:55:43.991497 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" podStartSLOduration=3.293599428 podStartE2EDuration="9.991484209s" podCreationTimestamp="2026-02-24 09:55:34 +0000 UTC" firstStartedPulling="2026-02-24 09:55:35.765221145 +0000 UTC m=+2858.152983693" lastFinishedPulling="2026-02-24 09:55:42.463105926 +0000 UTC m=+2864.850868474" observedRunningTime="2026-02-24 09:55:43.989355432 +0000 UTC m=+2866.377117980" watchObservedRunningTime="2026-02-24 09:55:43.991484209 +0000 UTC m=+2866.379246757" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.321109 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.323054 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.350247 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.415679 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.415742 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7njn\" (UniqueName: \"kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.415822 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.516757 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.516824 4822 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.516866 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7njn\" (UniqueName: \"kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.517450 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.517460 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.535184 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7njn\" (UniqueName: \"kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn\") pod \"redhat-marketplace-5dhcs\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.666531 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:55:44 crc kubenswrapper[4822]: I0224 09:55:44.932128 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46818: no serving certificate available for the kubelet" Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.121072 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:55:45 crc kubenswrapper[4822]: W0224 09:55:45.137171 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46fcc60c_5dd5_49d9_8203_7cee5c4bde97.slice/crio-83201ed6b6e715c99b4cc66f491c7c7590b62dee4e7b7b33c70fd1a3ca432f31 WatchSource:0}: Error finding container 83201ed6b6e715c99b4cc66f491c7c7590b62dee4e7b7b33c70fd1a3ca432f31: Status 404 returned error can't find the container with id 83201ed6b6e715c99b4cc66f491c7c7590b62dee4e7b7b33c70fd1a3ca432f31 Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.903754 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46820: no serving certificate available for the kubelet" Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.945790 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46834: no serving certificate available for the kubelet" Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.999668 4822 generic.go:334] "Generic (PLEG): container finished" podID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerID="979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09" exitCode=0 Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.999721 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerDied","Data":"979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09"} Feb 24 09:55:45 crc kubenswrapper[4822]: I0224 09:55:45.999780 4822 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerStarted","Data":"83201ed6b6e715c99b4cc66f491c7c7590b62dee4e7b7b33c70fd1a3ca432f31"} Feb 24 09:55:48 crc kubenswrapper[4822]: I0224 09:55:48.037459 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerStarted","Data":"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f"} Feb 24 09:55:48 crc kubenswrapper[4822]: E0224 09:55:48.780361 4822 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 09:55:48 crc kubenswrapper[4822]: I0224 09:55:48.943128 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46848: no serving certificate available for the kubelet" Feb 24 09:55:48 crc kubenswrapper[4822]: I0224 09:55:48.984828 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46858: no serving certificate available for the kubelet" Feb 24 09:55:49 crc kubenswrapper[4822]: I0224 09:55:49.049359 4822 generic.go:334] "Generic (PLEG): container finished" podID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerID="6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f" exitCode=0 Feb 24 09:55:49 crc kubenswrapper[4822]: I0224 09:55:49.049412 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerDied","Data":"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f"} Feb 24 09:55:51 crc kubenswrapper[4822]: I0224 09:55:51.981644 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38912: no serving certificate available for the kubelet" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.024221 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:38928: no serving certificate available for the kubelet" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.739983 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.745065 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.751542 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.753756 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.753896 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxs9\" (UniqueName: \"kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.753936 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.854772 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mfxs9\" (UniqueName: \"kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.855047 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.855155 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.855680 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.855675 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:52 crc kubenswrapper[4822]: I0224 09:55:52.888688 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mfxs9\" (UniqueName: \"kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9\") pod \"certified-operators-mhtfm\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:53 crc kubenswrapper[4822]: I0224 09:55:53.063135 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:55:55 crc kubenswrapper[4822]: I0224 09:55:55.030005 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38940: no serving certificate available for the kubelet" Feb 24 09:55:55 crc kubenswrapper[4822]: I0224 09:55:55.079387 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38942: no serving certificate available for the kubelet" Feb 24 09:55:55 crc kubenswrapper[4822]: I0224 09:55:55.430659 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:55:55 crc kubenswrapper[4822]: W0224 09:55:55.442392 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c22f6a_a168_499b_a599_fe62be2b8e5d.slice/crio-f9ca2b7311f24c1dae441402325b6838f4ca0fa6d02d7489fc78259484475ca7 WatchSource:0}: Error finding container f9ca2b7311f24c1dae441402325b6838f4ca0fa6d02d7489fc78259484475ca7: Status 404 returned error can't find the container with id f9ca2b7311f24c1dae441402325b6838f4ca0fa6d02d7489fc78259484475ca7 Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.111198 4822 generic.go:334] "Generic (PLEG): container finished" podID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerID="41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051" exitCode=0 Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.111258 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" 
event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerDied","Data":"41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051"} Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.111506 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerStarted","Data":"f9ca2b7311f24c1dae441402325b6838f4ca0fa6d02d7489fc78259484475ca7"} Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.113210 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" event={"ID":"e168f6db-20a2-4746-9d95-655e12c3928a","Type":"ContainerStarted","Data":"4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb"} Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.117032 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerStarted","Data":"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6"} Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.174012 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5dhcs" podStartSLOduration=2.869639563 podStartE2EDuration="12.173990773s" podCreationTimestamp="2026-02-24 09:55:44 +0000 UTC" firstStartedPulling="2026-02-24 09:55:46.002107132 +0000 UTC m=+2868.389869670" lastFinishedPulling="2026-02-24 09:55:55.306458292 +0000 UTC m=+2877.694220880" observedRunningTime="2026-02-24 09:55:56.168409641 +0000 UTC m=+2878.556172189" watchObservedRunningTime="2026-02-24 09:55:56.173990773 +0000 UTC m=+2878.561753331" Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.191002 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" podStartSLOduration=2.055775849 
podStartE2EDuration="13.190983665s" podCreationTimestamp="2026-02-24 09:55:43 +0000 UTC" firstStartedPulling="2026-02-24 09:55:43.914953029 +0000 UTC m=+2866.302715587" lastFinishedPulling="2026-02-24 09:55:55.050160855 +0000 UTC m=+2877.437923403" observedRunningTime="2026-02-24 09:55:56.185507256 +0000 UTC m=+2878.573269814" watchObservedRunningTime="2026-02-24 09:55:56.190983665 +0000 UTC m=+2878.578746223" Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.193315 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38946: no serving certificate available for the kubelet" Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.206941 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6jz7r/crc-debug-shgrh"] Feb 24 09:55:56 crc kubenswrapper[4822]: I0224 09:55:56.219366 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6jz7r/crc-debug-shgrh"] Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.196998 4822 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.207810 4822 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.236200 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38950: no serving certificate available for the kubelet" Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.274289 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38958: no serving certificate available for the kubelet" Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.317580 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38968: no serving certificate available for the kubelet" Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.370247 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38970: no serving certificate available for the kubelet" Feb 24 
09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.470360 4822 ???:1] "http: TLS handshake error from 192.168.126.11:38986: no serving certificate available for the kubelet" Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.587735 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39000: no serving certificate available for the kubelet" Feb 24 09:55:57 crc kubenswrapper[4822]: I0224 09:55:57.776238 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39010: no serving certificate available for the kubelet" Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.085087 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39012: no serving certificate available for the kubelet" Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.121105 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39024: no serving certificate available for the kubelet" Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.125780 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39026: no serving certificate available for the kubelet" Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.134575 4822 generic.go:334] "Generic (PLEG): container finished" podID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerID="701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49" exitCode=0 Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.134716 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" podUID="e168f6db-20a2-4746-9d95-655e12c3928a" containerName="container-00" containerID="cri-o://4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb" gracePeriod=2 Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.135087 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" 
event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerDied","Data":"701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49"} Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.790307 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39036: no serving certificate available for the kubelet" Feb 24 09:55:58 crc kubenswrapper[4822]: I0224 09:55:58.890931 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.050137 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcggs\" (UniqueName: \"kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs\") pod \"e168f6db-20a2-4746-9d95-655e12c3928a\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.050318 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host\") pod \"e168f6db-20a2-4746-9d95-655e12c3928a\" (UID: \"e168f6db-20a2-4746-9d95-655e12c3928a\") " Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.050386 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host" (OuterVolumeSpecName: "host") pod "e168f6db-20a2-4746-9d95-655e12c3928a" (UID: "e168f6db-20a2-4746-9d95-655e12c3928a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.050629 4822 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e168f6db-20a2-4746-9d95-655e12c3928a-host\") on node \"crc\" DevicePath \"\"" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.055620 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs" (OuterVolumeSpecName: "kube-api-access-mcggs") pod "e168f6db-20a2-4746-9d95-655e12c3928a" (UID: "e168f6db-20a2-4746-9d95-655e12c3928a"). InnerVolumeSpecName "kube-api-access-mcggs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.142342 4822 generic.go:334] "Generic (PLEG): container finished" podID="e168f6db-20a2-4746-9d95-655e12c3928a" containerID="4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb" exitCode=143 Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.142407 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6jz7r/crc-debug-shgrh" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.142451 4822 scope.go:117] "RemoveContainer" containerID="4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.147074 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerStarted","Data":"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58"} Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.152254 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcggs\" (UniqueName: \"kubernetes.io/projected/e168f6db-20a2-4746-9d95-655e12c3928a-kube-api-access-mcggs\") on node \"crc\" DevicePath \"\"" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.166878 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhtfm" podStartSLOduration=4.500605521 podStartE2EDuration="7.166853585s" podCreationTimestamp="2026-02-24 09:55:52 +0000 UTC" firstStartedPulling="2026-02-24 09:55:56.11503964 +0000 UTC m=+2878.502802198" lastFinishedPulling="2026-02-24 09:55:58.781287714 +0000 UTC m=+2881.169050262" observedRunningTime="2026-02-24 09:55:59.164599403 +0000 UTC m=+2881.552361951" watchObservedRunningTime="2026-02-24 09:55:59.166853585 +0000 UTC m=+2881.554616123" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.182560 4822 scope.go:117] "RemoveContainer" containerID="4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb" Feb 24 09:55:59 crc kubenswrapper[4822]: E0224 09:55:59.183110 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb\": container with ID starting with 
4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb not found: ID does not exist" containerID="4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb" Feb 24 09:55:59 crc kubenswrapper[4822]: I0224 09:55:59.183165 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb"} err="failed to get container status \"4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb\": rpc error: code = NotFound desc = could not find container \"4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb\": container with ID starting with 4ffba5ab9b050dc2fc8dc72d90229d263bf102a0d1ab756784860308dd382bcb not found: ID does not exist" Feb 24 09:56:00 crc kubenswrapper[4822]: I0224 09:56:00.100357 4822 ???:1] "http: TLS handshake error from 192.168.126.11:39044: no serving certificate available for the kubelet" Feb 24 09:56:00 crc kubenswrapper[4822]: I0224 09:56:00.347625 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e168f6db-20a2-4746-9d95-655e12c3928a" path="/var/lib/kubelet/pods/e168f6db-20a2-4746-9d95-655e12c3928a/volumes" Feb 24 09:56:01 crc kubenswrapper[4822]: I0224 09:56:01.130430 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59646: no serving certificate available for the kubelet" Feb 24 09:56:01 crc kubenswrapper[4822]: I0224 09:56:01.166206 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59660: no serving certificate available for the kubelet" Feb 24 09:56:02 crc kubenswrapper[4822]: I0224 09:56:02.687284 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59662: no serving certificate available for the kubelet" Feb 24 09:56:03 crc kubenswrapper[4822]: I0224 09:56:03.063363 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:03 crc kubenswrapper[4822]: I0224 09:56:03.063432 4822 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.102836 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mhtfm" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="registry-server" probeResult="failure" output=< Feb 24 09:56:04 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 09:56:04 crc kubenswrapper[4822]: > Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.163956 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59676: no serving certificate available for the kubelet" Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.204803 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59686: no serving certificate available for the kubelet" Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.667801 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.668149 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:04 crc kubenswrapper[4822]: I0224 09:56:04.715878 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:05 crc kubenswrapper[4822]: I0224 09:56:05.243622 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:05 crc kubenswrapper[4822]: I0224 09:56:05.296507 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.218310 4822 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-5dhcs" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="registry-server" containerID="cri-o://cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6" gracePeriod=2 Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.224029 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59688: no serving certificate available for the kubelet" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.267495 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59702: no serving certificate available for the kubelet" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.649660 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.790340 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content\") pod \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.790427 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities\") pod \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.790559 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7njn\" (UniqueName: \"kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn\") pod \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\" (UID: \"46fcc60c-5dd5-49d9-8203-7cee5c4bde97\") " Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.790906 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities" (OuterVolumeSpecName: "utilities") pod "46fcc60c-5dd5-49d9-8203-7cee5c4bde97" (UID: "46fcc60c-5dd5-49d9-8203-7cee5c4bde97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.795979 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn" (OuterVolumeSpecName: "kube-api-access-l7njn") pod "46fcc60c-5dd5-49d9-8203-7cee5c4bde97" (UID: "46fcc60c-5dd5-49d9-8203-7cee5c4bde97"). InnerVolumeSpecName "kube-api-access-l7njn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.816600 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46fcc60c-5dd5-49d9-8203-7cee5c4bde97" (UID: "46fcc60c-5dd5-49d9-8203-7cee5c4bde97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.835843 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59714: no serving certificate available for the kubelet" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.891962 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.891995 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:07 crc kubenswrapper[4822]: I0224 09:56:07.892004 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7njn\" (UniqueName: \"kubernetes.io/projected/46fcc60c-5dd5-49d9-8203-7cee5c4bde97-kube-api-access-l7njn\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.230021 4822 generic.go:334] "Generic (PLEG): container finished" podID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerID="cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6" exitCode=0 Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.230070 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerDied","Data":"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6"} Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.230097 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhcs" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.230111 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhcs" event={"ID":"46fcc60c-5dd5-49d9-8203-7cee5c4bde97","Type":"ContainerDied","Data":"83201ed6b6e715c99b4cc66f491c7c7590b62dee4e7b7b33c70fd1a3ca432f31"} Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.230134 4822 scope.go:117] "RemoveContainer" containerID="cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.251616 4822 scope.go:117] "RemoveContainer" containerID="6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.274033 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.277139 4822 scope.go:117] "RemoveContainer" containerID="979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.282044 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhcs"] Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.316650 4822 scope.go:117] "RemoveContainer" containerID="cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6" Feb 24 09:56:08 crc kubenswrapper[4822]: E0224 09:56:08.317115 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6\": container with ID starting with cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6 not found: ID does not exist" containerID="cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.317155 4822 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6"} err="failed to get container status \"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6\": rpc error: code = NotFound desc = could not find container \"cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6\": container with ID starting with cc665eebdb7deff12f883d2475eab49f61a3078f50b2e29073626e5c8861d7e6 not found: ID does not exist" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.317180 4822 scope.go:117] "RemoveContainer" containerID="6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f" Feb 24 09:56:08 crc kubenswrapper[4822]: E0224 09:56:08.317570 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f\": container with ID starting with 6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f not found: ID does not exist" containerID="6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.317601 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f"} err="failed to get container status \"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f\": rpc error: code = NotFound desc = could not find container \"6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f\": container with ID starting with 6fa44fd11d5f7503f8b7064a5764e153375063592a4da3c727b5010fcec5922f not found: ID does not exist" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.317621 4822 scope.go:117] "RemoveContainer" containerID="979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09" Feb 24 09:56:08 crc kubenswrapper[4822]: E0224 
09:56:08.317884 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09\": container with ID starting with 979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09 not found: ID does not exist" containerID="979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.317951 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09"} err="failed to get container status \"979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09\": rpc error: code = NotFound desc = could not find container \"979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09\": container with ID starting with 979ca48bb4f769cdc5da2888b4acdf7bbbc3408b51351f918cc0857d682efd09 not found: ID does not exist" Feb 24 09:56:08 crc kubenswrapper[4822]: I0224 09:56:08.348746 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" path="/var/lib/kubelet/pods/46fcc60c-5dd5-49d9-8203-7cee5c4bde97/volumes" Feb 24 09:56:10 crc kubenswrapper[4822]: I0224 09:56:10.262023 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59730: no serving certificate available for the kubelet" Feb 24 09:56:10 crc kubenswrapper[4822]: I0224 09:56:10.318332 4822 ???:1] "http: TLS handshake error from 192.168.126.11:59744: no serving certificate available for the kubelet" Feb 24 09:56:13 crc kubenswrapper[4822]: I0224 09:56:13.144133 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:13 crc kubenswrapper[4822]: I0224 09:56:13.215562 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:13 crc kubenswrapper[4822]: I0224 09:56:13.308359 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55248: no serving certificate available for the kubelet" Feb 24 09:56:13 crc kubenswrapper[4822]: I0224 09:56:13.371733 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55254: no serving certificate available for the kubelet" Feb 24 09:56:13 crc kubenswrapper[4822]: I0224 09:56:13.394865 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.290617 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhtfm" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="registry-server" containerID="cri-o://d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58" gracePeriod=2 Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.721961 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.812177 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities\") pod \"29c22f6a-a168-499b-a599-fe62be2b8e5d\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.812248 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content\") pod \"29c22f6a-a168-499b-a599-fe62be2b8e5d\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.812335 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxs9\" (UniqueName: \"kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9\") pod \"29c22f6a-a168-499b-a599-fe62be2b8e5d\" (UID: \"29c22f6a-a168-499b-a599-fe62be2b8e5d\") " Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.813634 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities" (OuterVolumeSpecName: "utilities") pod "29c22f6a-a168-499b-a599-fe62be2b8e5d" (UID: "29c22f6a-a168-499b-a599-fe62be2b8e5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.819943 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9" (OuterVolumeSpecName: "kube-api-access-mfxs9") pod "29c22f6a-a168-499b-a599-fe62be2b8e5d" (UID: "29c22f6a-a168-499b-a599-fe62be2b8e5d"). InnerVolumeSpecName "kube-api-access-mfxs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.885728 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29c22f6a-a168-499b-a599-fe62be2b8e5d" (UID: "29c22f6a-a168-499b-a599-fe62be2b8e5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.914815 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.914855 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c22f6a-a168-499b-a599-fe62be2b8e5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:14 crc kubenswrapper[4822]: I0224 09:56:14.914870 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxs9\" (UniqueName: \"kubernetes.io/projected/29c22f6a-a168-499b-a599-fe62be2b8e5d-kube-api-access-mfxs9\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.302805 4822 generic.go:334] "Generic (PLEG): container finished" podID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerID="d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58" exitCode=0 Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.302872 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerDied","Data":"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58"} Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.302889 4822 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhtfm" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.302943 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhtfm" event={"ID":"29c22f6a-a168-499b-a599-fe62be2b8e5d","Type":"ContainerDied","Data":"f9ca2b7311f24c1dae441402325b6838f4ca0fa6d02d7489fc78259484475ca7"} Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.302970 4822 scope.go:117] "RemoveContainer" containerID="d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.333607 4822 scope.go:117] "RemoveContainer" containerID="701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.345868 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.363278 4822 scope.go:117] "RemoveContainer" containerID="41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.374327 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhtfm"] Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.423602 4822 scope.go:117] "RemoveContainer" containerID="d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58" Feb 24 09:56:15 crc kubenswrapper[4822]: E0224 09:56:15.424100 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58\": container with ID starting with d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58 not found: ID does not exist" containerID="d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.424155 
4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58"} err="failed to get container status \"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58\": rpc error: code = NotFound desc = could not find container \"d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58\": container with ID starting with d8eb02c892380eeff83a93d9aa3bc8e4c60c9ab4acdec249197f69a8ce2b0a58 not found: ID does not exist" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.424191 4822 scope.go:117] "RemoveContainer" containerID="701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49" Feb 24 09:56:15 crc kubenswrapper[4822]: E0224 09:56:15.424490 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49\": container with ID starting with 701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49 not found: ID does not exist" containerID="701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.424533 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49"} err="failed to get container status \"701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49\": rpc error: code = NotFound desc = could not find container \"701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49\": container with ID starting with 701c15933e031ab54882193531f53ec42d64d0852b105c3303647cc8e47f4e49 not found: ID does not exist" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.424556 4822 scope.go:117] "RemoveContainer" containerID="41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051" Feb 24 09:56:15 crc kubenswrapper[4822]: E0224 
09:56:15.424853 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051\": container with ID starting with 41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051 not found: ID does not exist" containerID="41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051" Feb 24 09:56:15 crc kubenswrapper[4822]: I0224 09:56:15.424894 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051"} err="failed to get container status \"41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051\": rpc error: code = NotFound desc = could not find container \"41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051\": container with ID starting with 41c930fa7f636abcd3976e1198375f1b0e39e98c590a4628739faefcd3513051 not found: ID does not exist" Feb 24 09:56:16 crc kubenswrapper[4822]: I0224 09:56:16.346839 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" path="/var/lib/kubelet/pods/29c22f6a-a168-499b-a599-fe62be2b8e5d/volumes" Feb 24 09:56:16 crc kubenswrapper[4822]: I0224 09:56:16.349633 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55270: no serving certificate available for the kubelet" Feb 24 09:56:16 crc kubenswrapper[4822]: I0224 09:56:16.410810 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55276: no serving certificate available for the kubelet" Feb 24 09:56:18 crc kubenswrapper[4822]: I0224 09:56:18.100493 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55284: no serving certificate available for the kubelet" Feb 24 09:56:19 crc kubenswrapper[4822]: I0224 09:56:19.394687 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55292: no serving certificate available for the kubelet" Feb 24 09:56:19 crc 
kubenswrapper[4822]: I0224 09:56:19.455029 4822 ???:1] "http: TLS handshake error from 192.168.126.11:55298: no serving certificate available for the kubelet" Feb 24 09:56:22 crc kubenswrapper[4822]: I0224 09:56:22.443992 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36126: no serving certificate available for the kubelet" Feb 24 09:56:22 crc kubenswrapper[4822]: I0224 09:56:22.501962 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36128: no serving certificate available for the kubelet" Feb 24 09:56:25 crc kubenswrapper[4822]: I0224 09:56:25.502989 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36132: no serving certificate available for the kubelet" Feb 24 09:56:25 crc kubenswrapper[4822]: I0224 09:56:25.615104 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36144: no serving certificate available for the kubelet" Feb 24 09:56:28 crc kubenswrapper[4822]: I0224 09:56:28.537039 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36146: no serving certificate available for the kubelet" Feb 24 09:56:28 crc kubenswrapper[4822]: I0224 09:56:28.649200 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36156: no serving certificate available for the kubelet" Feb 24 09:56:31 crc kubenswrapper[4822]: I0224 09:56:31.576484 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45374: no serving certificate available for the kubelet" Feb 24 09:56:31 crc kubenswrapper[4822]: I0224 09:56:31.694884 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45390: no serving certificate available for the kubelet" Feb 24 09:56:34 crc kubenswrapper[4822]: I0224 09:56:34.634368 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45398: no serving certificate available for the kubelet" Feb 24 09:56:34 crc kubenswrapper[4822]: I0224 09:56:34.740360 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45410: no serving certificate available for the kubelet" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 
09:56:36.110539 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fp7kw"] Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.110986 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="extract-utilities" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111002 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="extract-utilities" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111022 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111032 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111055 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e168f6db-20a2-4746-9d95-655e12c3928a" containerName="container-00" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111066 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="e168f6db-20a2-4746-9d95-655e12c3928a" containerName="container-00" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111083 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="extract-utilities" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111092 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="extract-utilities" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111108 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="extract-content" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111117 4822 
state_mem.go:107] "Deleted CPUSet assignment" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="extract-content" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111134 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="extract-content" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111145 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="extract-content" Feb 24 09:56:36 crc kubenswrapper[4822]: E0224 09:56:36.111158 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111167 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111419 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c22f6a-a168-499b-a599-fe62be2b8e5d" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111440 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fcc60c-5dd5-49d9-8203-7cee5c4bde97" containerName="registry-server" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.111466 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="e168f6db-20a2-4746-9d95-655e12c3928a" containerName="container-00" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.113104 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.140054 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fp7kw"] Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.293900 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.294263 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.294369 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5g2\" (UniqueName: \"kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.396251 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.396308 4822 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.396391 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5g2\" (UniqueName: \"kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.396844 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.396943 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.414057 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5g2\" (UniqueName: \"kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2\") pod \"community-operators-fp7kw\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") " pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:36 crc kubenswrapper[4822]: I0224 09:56:36.437853 4822 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fp7kw" Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.014259 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fp7kw"] Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.499475 4822 generic.go:334] "Generic (PLEG): container finished" podID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerID="5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b" exitCode=0 Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.499820 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerDied","Data":"5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b"} Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.499849 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerStarted","Data":"99255d7cdc99639a771c39ec841f2d6ca9540b47ca30781e922dc22e85c2ed0e"} Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.678264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45420: no serving certificate available for the kubelet" Feb 24 09:56:37 crc kubenswrapper[4822]: I0224 09:56:37.778075 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45422: no serving certificate available for the kubelet" Feb 24 09:56:38 crc kubenswrapper[4822]: I0224 09:56:38.509527 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerStarted","Data":"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"} Feb 24 09:56:38 crc kubenswrapper[4822]: I0224 09:56:38.607105 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45434: no serving certificate available 
for the kubelet" Feb 24 09:56:39 crc kubenswrapper[4822]: I0224 09:56:39.525124 4822 generic.go:334] "Generic (PLEG): container finished" podID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerID="118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452" exitCode=0 Feb 24 09:56:39 crc kubenswrapper[4822]: I0224 09:56:39.525364 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerDied","Data":"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"} Feb 24 09:56:40 crc kubenswrapper[4822]: I0224 09:56:40.538306 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerStarted","Data":"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"} Feb 24 09:56:40 crc kubenswrapper[4822]: I0224 09:56:40.579667 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fp7kw" podStartSLOduration=2.101276055 podStartE2EDuration="4.579642821s" podCreationTimestamp="2026-02-24 09:56:36 +0000 UTC" firstStartedPulling="2026-02-24 09:56:37.501119021 +0000 UTC m=+2919.888881579" lastFinishedPulling="2026-02-24 09:56:39.979485787 +0000 UTC m=+2922.367248345" observedRunningTime="2026-02-24 09:56:40.566206155 +0000 UTC m=+2922.953968743" watchObservedRunningTime="2026-02-24 09:56:40.579642821 +0000 UTC m=+2922.967405379" Feb 24 09:56:40 crc kubenswrapper[4822]: I0224 09:56:40.742622 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45446: no serving certificate available for the kubelet" Feb 24 09:56:40 crc kubenswrapper[4822]: I0224 09:56:40.825041 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45462: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.383602 4822 ???:1] "http: TLS 
handshake error from 192.168.126.11:36514: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.529949 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36524: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.600269 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36530: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.636862 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36532: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.738982 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36534: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.829739 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36544: no serving certificate available for the kubelet" Feb 24 09:56:42 crc kubenswrapper[4822]: I0224 09:56:42.987881 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36560: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.018284 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36574: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.043319 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36582: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.162137 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36584: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.316331 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36588: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.322629 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:36590: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.353342 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36594: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.508957 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36608: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.522022 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36612: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.678747 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36620: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.778230 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36634: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.828340 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36650: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.847371 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36656: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.859069 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36662: no serving certificate available for the kubelet" Feb 24 09:56:43 crc kubenswrapper[4822]: I0224 09:56:43.917250 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36664: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.006452 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36670: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.064143 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36678: no 
serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.088680 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36692: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.181775 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36708: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.250147 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36716: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.296747 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36720: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.422612 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36728: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.574745 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36732: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.589615 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36736: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.642314 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36740: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.801991 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36744: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.821688 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36758: no serving certificate available for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.852878 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36770: no serving certificate available 
for the kubelet" Feb 24 09:56:44 crc kubenswrapper[4822]: I0224 09:56:44.998704 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36780: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.022809 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36786: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.038220 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36790: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.173090 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36806: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.175494 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36820: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.197502 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36830: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.259385 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36832: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.370526 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36842: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.407049 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36846: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.409385 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36858: no serving certificate available for the kubelet" Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.445404 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36868: no serving certificate available for the kubelet" Feb 24 
09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.551246 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36876: no serving certificate available for the kubelet"
Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.583236 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36890: no serving certificate available for the kubelet"
Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.613673 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36902: no serving certificate available for the kubelet"
Feb 24 09:56:45 crc kubenswrapper[4822]: I0224 09:56:45.631962 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36910: no serving certificate available for the kubelet"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.438772 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.438829 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.486881 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.625069 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.715301 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fp7kw"]
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.818160 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36918: no serving certificate available for the kubelet"
Feb 24 09:56:46 crc kubenswrapper[4822]: I0224 09:56:46.894287 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36934: no serving certificate available for the kubelet"
Feb 24 09:56:48 crc kubenswrapper[4822]: I0224 09:56:48.606803 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fp7kw" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="registry-server" containerID="cri-o://cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b" gracePeriod=2
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.077268 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.139935 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content\") pod \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") "
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.196130 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de3523eb-cfe0-4994-9342-9b59c22dc6b4" (UID: "de3523eb-cfe0-4994-9342-9b59c22dc6b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.241044 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities\") pod \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") "
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.241088 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx5g2\" (UniqueName: \"kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2\") pod \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\" (UID: \"de3523eb-cfe0-4994-9342-9b59c22dc6b4\") "
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.241456 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.241881 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities" (OuterVolumeSpecName: "utilities") pod "de3523eb-cfe0-4994-9342-9b59c22dc6b4" (UID: "de3523eb-cfe0-4994-9342-9b59c22dc6b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.246337 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2" (OuterVolumeSpecName: "kube-api-access-kx5g2") pod "de3523eb-cfe0-4994-9342-9b59c22dc6b4" (UID: "de3523eb-cfe0-4994-9342-9b59c22dc6b4"). InnerVolumeSpecName "kube-api-access-kx5g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.343132 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx5g2\" (UniqueName: \"kubernetes.io/projected/de3523eb-cfe0-4994-9342-9b59c22dc6b4-kube-api-access-kx5g2\") on node \"crc\" DevicePath \"\""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.343162 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3523eb-cfe0-4994-9342-9b59c22dc6b4-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.614332 4822 generic.go:334] "Generic (PLEG): container finished" podID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerID="cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b" exitCode=0
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.614375 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerDied","Data":"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"}
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.614402 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp7kw" event={"ID":"de3523eb-cfe0-4994-9342-9b59c22dc6b4","Type":"ContainerDied","Data":"99255d7cdc99639a771c39ec841f2d6ca9540b47ca30781e922dc22e85c2ed0e"}
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.614403 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fp7kw"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.614420 4822 scope.go:117] "RemoveContainer" containerID="cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.630881 4822 scope.go:117] "RemoveContainer" containerID="118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.645202 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fp7kw"]
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.652171 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fp7kw"]
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.657131 4822 scope.go:117] "RemoveContainer" containerID="5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.685147 4822 scope.go:117] "RemoveContainer" containerID="cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"
Feb 24 09:56:49 crc kubenswrapper[4822]: E0224 09:56:49.685548 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b\": container with ID starting with cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b not found: ID does not exist" containerID="cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.685578 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b"} err="failed to get container status \"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b\": rpc error: code = NotFound desc = could not find container \"cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b\": container with ID starting with cad288a988ee5c26405dd1c9bfadfad0d96292b966864f1ec8d55e7c087a7a1b not found: ID does not exist"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.685595 4822 scope.go:117] "RemoveContainer" containerID="118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"
Feb 24 09:56:49 crc kubenswrapper[4822]: E0224 09:56:49.685875 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452\": container with ID starting with 118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452 not found: ID does not exist" containerID="118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.685927 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452"} err="failed to get container status \"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452\": rpc error: code = NotFound desc = could not find container \"118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452\": container with ID starting with 118906bd4ada2d8d4870421a27a26a230cf421c3c056de95b4012d391e3d4452 not found: ID does not exist"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.685952 4822 scope.go:117] "RemoveContainer" containerID="5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b"
Feb 24 09:56:49 crc kubenswrapper[4822]: E0224 09:56:49.686218 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b\": container with ID starting with 5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b not found: ID does not exist" containerID="5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.686240 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b"} err="failed to get container status \"5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b\": rpc error: code = NotFound desc = could not find container \"5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b\": container with ID starting with 5d26b777202cba0f1ec454ba907d3ff96070a666f0b99eb171e5524bc7bb798b not found: ID does not exist"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.855830 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36936: no serving certificate available for the kubelet"
Feb 24 09:56:49 crc kubenswrapper[4822]: I0224 09:56:49.941043 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36940: no serving certificate available for the kubelet"
Feb 24 09:56:50 crc kubenswrapper[4822]: I0224 09:56:50.345644 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" path="/var/lib/kubelet/pods/de3523eb-cfe0-4994-9342-9b59c22dc6b4/volumes"
Feb 24 09:56:52 crc kubenswrapper[4822]: I0224 09:56:52.907508 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51980: no serving certificate available for the kubelet"
Feb 24 09:56:52 crc kubenswrapper[4822]: I0224 09:56:52.990747 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51994: no serving certificate available for the kubelet"
Feb 24 09:56:55 crc kubenswrapper[4822]: I0224 09:56:55.948458 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51998: no serving certificate available for the kubelet"
Feb 24 09:56:56 crc kubenswrapper[4822]: I0224 09:56:56.027069 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52014: no serving certificate available for the
kubelet" Feb 24 09:56:58 crc kubenswrapper[4822]: I0224 09:56:58.992959 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52024: no serving certificate available for the kubelet" Feb 24 09:56:59 crc kubenswrapper[4822]: I0224 09:56:59.072472 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52040: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.025450 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34912: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.106365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34918: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.754134 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34928: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.885376 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34934: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.921928 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34938: no serving certificate available for the kubelet" Feb 24 09:57:02 crc kubenswrapper[4822]: I0224 09:57:02.927536 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34942: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.083370 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34948: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.101300 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34952: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.136529 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34966: no serving certificate available for the kubelet" Feb 24 09:57:03 crc 
kubenswrapper[4822]: I0224 09:57:03.250922 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34968: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.288070 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34984: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.422017 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34990: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.497698 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34998: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.603597 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35010: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.650322 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35014: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.833366 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35030: no serving certificate available for the kubelet" Feb 24 09:57:03 crc kubenswrapper[4822]: I0224 09:57:03.894215 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35032: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.037167 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35044: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.103418 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35060: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.202284 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35062: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 
09:57:04.289389 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35064: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.380463 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35066: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.479368 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35076: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.552809 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35090: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.621618 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35096: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.752300 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35104: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.817053 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35110: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.949831 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35118: no serving certificate available for the kubelet" Feb 24 09:57:04 crc kubenswrapper[4822]: I0224 09:57:04.975120 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35128: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.064689 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35142: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.146898 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35154: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.157245 4822 ???:1] 
"http: TLS handshake error from 192.168.126.11:35168: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.219024 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35178: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.309087 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35192: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.410871 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35208: no serving certificate available for the kubelet" Feb 24 09:57:05 crc kubenswrapper[4822]: I0224 09:57:05.543295 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35214: no serving certificate available for the kubelet" Feb 24 09:57:08 crc kubenswrapper[4822]: I0224 09:57:08.100620 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35226: no serving certificate available for the kubelet" Feb 24 09:57:08 crc kubenswrapper[4822]: I0224 09:57:08.180220 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35238: no serving certificate available for the kubelet" Feb 24 09:57:11 crc kubenswrapper[4822]: I0224 09:57:11.149778 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42422: no serving certificate available for the kubelet" Feb 24 09:57:11 crc kubenswrapper[4822]: I0224 09:57:11.219573 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42424: no serving certificate available for the kubelet" Feb 24 09:57:14 crc kubenswrapper[4822]: I0224 09:57:14.190308 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42428: no serving certificate available for the kubelet" Feb 24 09:57:14 crc kubenswrapper[4822]: I0224 09:57:14.281456 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42444: no serving certificate available for the kubelet" Feb 24 09:57:17 crc kubenswrapper[4822]: I0224 09:57:17.234857 4822 ???:1] "http: TLS handshake error 
from 192.168.126.11:42448: no serving certificate available for the kubelet"
Feb 24 09:57:17 crc kubenswrapper[4822]: I0224 09:57:17.336945 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42456: no serving certificate available for the kubelet"
Feb 24 09:57:19 crc kubenswrapper[4822]: I0224 09:57:19.594378 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42462: no serving certificate available for the kubelet"
Feb 24 09:57:20 crc kubenswrapper[4822]: I0224 09:57:20.285640 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42478: no serving certificate available for the kubelet"
Feb 24 09:57:20 crc kubenswrapper[4822]: I0224 09:57:20.374657 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42484: no serving certificate available for the kubelet"
Feb 24 09:57:23 crc kubenswrapper[4822]: I0224 09:57:23.355361 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42348: no serving certificate available for the kubelet"
Feb 24 09:57:23 crc kubenswrapper[4822]: I0224 09:57:23.432786 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42362: no serving certificate available for the kubelet"
Feb 24 09:57:25 crc kubenswrapper[4822]: I0224 09:57:25.617658 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42374: no serving certificate available for the kubelet"
Feb 24 09:57:25 crc kubenswrapper[4822]: I0224 09:57:25.757537 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42378: no serving certificate available for the kubelet"
Feb 24 09:57:25 crc kubenswrapper[4822]: I0224 09:57:25.770842 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42384: no serving certificate available for the kubelet"
Feb 24 09:57:26 crc kubenswrapper[4822]: I0224 09:57:26.422544 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42394: no serving certificate available for the kubelet"
Feb 24 09:57:26 crc kubenswrapper[4822]: I0224 09:57:26.496266 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42410: no serving certificate available for the kubelet"
Feb 24 09:57:29 crc kubenswrapper[4822]: I0224 09:57:29.461707 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42420: no serving certificate available for the kubelet"
Feb 24 09:57:29 crc kubenswrapper[4822]: I0224 09:57:29.535946 4822 ???:1] "http: TLS handshake error from 192.168.126.11:42432: no serving certificate available for the kubelet"
Feb 24 09:57:32 crc kubenswrapper[4822]: I0224 09:57:32.504440 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47308: no serving certificate available for the kubelet"
Feb 24 09:57:32 crc kubenswrapper[4822]: I0224 09:57:32.589030 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47318: no serving certificate available for the kubelet"
Feb 24 09:57:35 crc kubenswrapper[4822]: I0224 09:57:35.568795 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47330: no serving certificate available for the kubelet"
Feb 24 09:57:35 crc kubenswrapper[4822]: I0224 09:57:35.651356 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47334: no serving certificate available for the kubelet"
Feb 24 09:57:38 crc kubenswrapper[4822]: I0224 09:57:38.622330 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47340: no serving certificate available for the kubelet"
Feb 24 09:57:38 crc kubenswrapper[4822]: I0224 09:57:38.710988 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47342: no serving certificate available for the kubelet"
Feb 24 09:57:40 crc kubenswrapper[4822]: I0224 09:57:40.313734 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47356: no serving certificate available for the kubelet"
Feb 24 09:57:40 crc kubenswrapper[4822]: I0224 09:57:40.463483 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47366: no serving certificate available for the kubelet"
Feb 24 09:57:40 crc kubenswrapper[4822]: I0224 09:57:40.498668 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47378: no serving certificate available for the kubelet"
Feb 24 09:57:41 crc kubenswrapper[4822]: I0224 09:57:41.671984 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45836: no serving certificate available for the kubelet"
Feb 24 09:57:41 crc kubenswrapper[4822]: I0224 09:57:41.752434 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45844: no serving certificate available for the kubelet"
Feb 24 09:57:44 crc kubenswrapper[4822]: I0224 09:57:44.727648 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45852: no serving certificate available for the kubelet"
Feb 24 09:57:44 crc kubenswrapper[4822]: I0224 09:57:44.808700 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45866: no serving certificate available for the kubelet"
Feb 24 09:57:45 crc kubenswrapper[4822]: I0224 09:57:45.676469 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:57:45 crc kubenswrapper[4822]: I0224 09:57:45.676835 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:57:47 crc kubenswrapper[4822]: I0224 09:57:47.785057 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45876: no serving certificate available for the kubelet"
Feb 24 09:57:47 crc kubenswrapper[4822]: I0224 09:57:47.864405 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45882: no serving certificate available for the kubelet"
Feb 24 09:57:50 crc kubenswrapper[4822]: I0224 09:57:50.834255 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45892: no serving certificate available for the kubelet"
Feb 24 09:57:50 crc kubenswrapper[4822]: I0224 09:57:50.923529 4822 ???:1] "http: TLS handshake error from 192.168.126.11:45896: no serving certificate available for the kubelet"
Feb 24 09:57:53 crc kubenswrapper[4822]: I0224 09:57:53.881243 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49046: no serving certificate available for the kubelet"
Feb 24 09:57:53 crc kubenswrapper[4822]: I0224 09:57:53.961785 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49054: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.251381 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49068: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.386123 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49070: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.422555 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49086: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.423566 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49092: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.628264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49108: no serving certificate available for the kubelet"
Feb 24 09:57:54 crc kubenswrapper[4822]: I0224 09:57:54.633979 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49122: no serving certificate available for the kubelet"
Feb 24 09:57:56 crc kubenswrapper[4822]: I0224 09:57:56.999478 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49132: no serving certificate available for the kubelet"
Feb 24 09:57:57 crc kubenswrapper[4822]: I0224 09:57:57.061215 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49136: no serving certificate available for the kubelet"
Feb 24 09:58:00 crc kubenswrapper[4822]: I0224 09:58:00.048518 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49142: no serving certificate available for the kubelet"
Feb 24 09:58:00 crc kubenswrapper[4822]: I0224 09:58:00.106299 4822 ???:1] "http: TLS handshake error from 192.168.126.11:49148: no serving certificate available for the kubelet"
Feb 24 09:58:03 crc kubenswrapper[4822]: I0224 09:58:03.104657 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36756: no serving certificate available for the kubelet"
Feb 24 09:58:03 crc kubenswrapper[4822]: I0224 09:58:03.167836 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36760: no serving certificate available for the kubelet"
Feb 24 09:58:06 crc kubenswrapper[4822]: I0224 09:58:06.149808 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36768: no serving certificate available for the kubelet"
Feb 24 09:58:06 crc kubenswrapper[4822]: I0224 09:58:06.219745 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36772: no serving certificate available for the kubelet"
Feb 24 09:58:09 crc kubenswrapper[4822]: I0224 09:58:09.185861 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36780: no serving certificate available for the kubelet"
Feb 24 09:58:09 crc kubenswrapper[4822]: I0224 09:58:09.257191 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36788: no serving certificate available for the kubelet"
Feb 24 09:58:12 crc kubenswrapper[4822]: I0224 09:58:12.235673 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54336: no serving certificate available for the kubelet"
Feb 24 09:58:12 crc kubenswrapper[4822]: I0224 09:58:12.296888 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54346: no serving certificate available for the kubelet"
Feb 24 09:58:14 crc kubenswrapper[4822]: I0224 09:58:14.629488 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" probeResult="failure" output=<
Feb 24 09:58:14 crc kubenswrapper[4822]: waiting for gcomm URI
Feb 24 09:58:14 crc kubenswrapper[4822]: >
Feb 24 09:58:14 crc kubenswrapper[4822]: I0224 09:58:14.629891 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 09:58:14 crc kubenswrapper[4822]: I0224 09:58:14.631125 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 09:58:14 crc kubenswrapper[4822]: I0224 09:58:14.708564 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerName="galera" containerID="cri-o://7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" gracePeriod=30
Feb 24 09:58:14 crc kubenswrapper[4822]: E0224 09:58:14.842710 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.289202 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54356: no serving certificate available for the kubelet"
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.354597 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54358: no serving certificate available for the kubelet"
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.389167 4822 generic.go:334] "Generic (PLEG): container finished" podID="3ff049ae-9abb-4477-9f51-eee7228cedfd" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" exitCode=143
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.389226 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ff049ae-9abb-4477-9f51-eee7228cedfd","Type":"ContainerDied","Data":"7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"}
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.389276 4822 scope.go:117] "RemoveContainer" containerID="e8262553ae45566fa847c362fce596c8e0e36bcbf4f34da176d8086e02e19351"
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.390061 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 09:58:15 crc kubenswrapper[4822]: E0224 09:58:15.390413 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.677145 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:58:15 crc kubenswrapper[4822]: I0224 09:58:15.677423 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:58:18 crc kubenswrapper[4822]: I0224 09:58:18.341963 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54374: no serving certificate available for the kubelet"
Feb 24 09:58:18 crc kubenswrapper[4822]: I0224 09:58:18.395455 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54384: no serving certificate available for the kubelet"
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.381587 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43690: no serving certificate available for the kubelet"
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.442192 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43698: no serving certificate available for the kubelet"
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.919721 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" probeResult="failure" output=<
Feb 24 09:58:21 crc kubenswrapper[4822]: waiting for gcomm URI
Feb 24 09:58:21 crc kubenswrapper[4822]: >
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.919873 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.920666 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 09:58:21 crc kubenswrapper[4822]: I0224 09:58:21.997296 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerName="galera" containerID="cri-o://9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" gracePeriod=30
Feb 24 09:58:22 crc kubenswrapper[4822]: E0224 09:58:22.132634 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 09:58:22 crc kubenswrapper[4822]: I0224 09:58:22.450525 4822 generic.go:334] "Generic (PLEG): container finished" podID="92cdd045-d60c-433d-b2e3-32f93299ee8e" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" exitCode=143
Feb 24 09:58:22 crc kubenswrapper[4822]: I0224 09:58:22.450598 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92cdd045-d60c-433d-b2e3-32f93299ee8e","Type":"ContainerDied","Data":"9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"}
Feb 24 09:58:22 crc kubenswrapper[4822]: I0224 09:58:22.450672 4822 scope.go:117] "RemoveContainer" containerID="a2afc6862e9196c8b25ff4a3dadf65069e66cf79d0505828a5b055a7d9ff368e"
Feb 24 09:58:22 crc kubenswrapper[4822]: I0224 09:58:22.451784 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 09:58:22 crc kubenswrapper[4822]: E0224 09:58:22.452354 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 09:58:23 crc kubenswrapper[4822]: I0224 09:58:23.057406 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 09:58:23 crc kubenswrapper[4822]: I0224 09:58:23.058449 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 09:58:23 crc kubenswrapper[4822]: E0224 09:58:23.058955 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 09:58:23 crc kubenswrapper[4822]: I0224 09:58:23.884102 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43712: no serving certificate available for the kubelet"
Feb 24 09:58:23 crc kubenswrapper[4822]: I0224 09:58:23.925574 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43714: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.089597 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43722: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.249963 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43736: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.266216 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43742: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.268377 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43758: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.276560 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43766: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.414697 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43776: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.426598 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43778: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.447367 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43792: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.448191 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43802: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.476883 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43806: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.483219 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43810: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.692464 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43826: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.698151 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43842: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.724938 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43856: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.742682 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43860: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.849056 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43868: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.870487 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43876: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.904400 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43884: no serving certificate available for the kubelet"
Feb 24 09:58:24 crc kubenswrapper[4822]: I0224 09:58:24.928742 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43886: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.026652 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43900: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.082096 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43902: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.189219 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43914: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.256825 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43920: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.382464 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43928: no serving certificate available for the kubelet"
Feb 24 09:58:25 crc kubenswrapper[4822]: I0224 09:58:25.422299 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43934: no serving certificate available for the kubelet"
Feb 24 09:58:27 crc kubenswrapper[4822]: I0224 09:58:27.468058 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43944: no serving certificate available for the kubelet"
Feb 24 09:58:27 crc kubenswrapper[4822]: I0224 09:58:27.537659 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43960: no serving certificate available for the kubelet"
Feb 24 09:58:30 crc kubenswrapper[4822]: I0224 09:58:30.517084 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43974: no serving certificate available for the kubelet"
Feb 24 09:58:30 crc kubenswrapper[4822]: I0224 09:58:30.576590 4822 ???:1] "http: TLS handshake error from 192.168.126.11:43988: no serving certificate available for the kubelet"
Feb 24 09:58:31 crc kubenswrapper[4822]: I0224 09:58:31.616454 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 09:58:31 crc kubenswrapper[4822]: I0224 09:58:31.617463 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 09:58:31 crc kubenswrapper[4822]: E0224 09:58:31.617752 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 09:58:33 crc kubenswrapper[4822]: I0224 09:58:33.574984 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34008: no serving certificate available for the kubelet"
Feb 24 09:58:33 crc kubenswrapper[4822]: I0224 09:58:33.621102 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34018: no serving certificate available for the kubelet"
Feb 24 09:58:36 crc kubenswrapper[4822]: I0224 09:58:36.337503 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 09:58:36 crc kubenswrapper[4822]: E0224 09:58:36.338185 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 09:58:36 crc kubenswrapper[4822]: I0224 09:58:36.613427 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34032: no serving certificate available for the kubelet"
Feb 24 09:58:36 crc kubenswrapper[4822]: I0224 09:58:36.728430 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34038: no serving certificate available for the kubelet"
Feb 24 09:58:39 crc kubenswrapper[4822]: I0224 09:58:39.649361 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34046: no serving certificate available for the kubelet"
Feb 24 09:58:39 crc kubenswrapper[4822]: I0224 09:58:39.770397 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34058: no serving certificate available for the kubelet"
Feb 24 09:58:39 crc kubenswrapper[4822]: I0224 09:58:39.776984 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34072: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.003923 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34076: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.023897 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34092: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.034841 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34094: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.149970 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34106: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.174285 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34110: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.178596 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34120: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.314114 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34128: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.485431 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34138: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.497006 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34150: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.498824 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34160: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.635611 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34164: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.685298 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34174: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.689596 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34188: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.864047 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34200: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.964484 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34208: no serving certificate available for the kubelet"
Feb 24 09:58:40 crc kubenswrapper[4822]: I0224 09:58:40.996156 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34214: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.021437 4822 ???:1] "http: TLS handshake error from 192.168.126.11:34218: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.167663 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58044: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.173845 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58046: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.189412 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58052: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.321443 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58058: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.464682 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58070: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.472195 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58078: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.504718 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58090: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.537581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58104: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.640277 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58120: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.651896 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58128: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.658535 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58132: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.809451 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58140: no serving certificate available for the kubelet"
Feb 24 09:58:41 crc kubenswrapper[4822]: I0224 09:58:41.836969 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58156: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:41.999772 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58164: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.008752 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58180: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.059036 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58194: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.143439 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58206: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.194408 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58218: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.208756 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58226: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.362282 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58234: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.518393 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58244: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.521064 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58256: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.569108 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58268: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.681287 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58270: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.756878 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58278: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.767496 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58288: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.773042 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58296: no serving certificate available for the kubelet"
Feb 24 09:58:42 crc kubenswrapper[4822]: I0224 09:58:42.802182 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58302: no serving certificate available for the kubelet"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.337211 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 09:58:45 crc kubenswrapper[4822]: E0224 09:58:45.337832 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.677091 4822 patch_prober.go:28] interesting pod/machine-config-daemon-qd752 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.677197 4822 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.677267 4822 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd752"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.678243 4822 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"} pod="openshift-machine-config-operator/machine-config-daemon-qd752" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.678349 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerName="machine-config-daemon" containerID="cri-o://0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" gracePeriod=600
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.733217 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58312: no serving certificate available for the kubelet"
Feb 24 09:58:45 crc kubenswrapper[4822]: E0224 09:58:45.811896 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 09:58:45 crc kubenswrapper[4822]: I0224 09:58:45.841310 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58314: no serving certificate available for the kubelet"
Feb 24 09:58:46 crc kubenswrapper[4822]: I0224 09:58:46.671290 4822 generic.go:334] "Generic (PLEG): container finished" podID="306aba52-0b6e-4d3f-b05f-757daebc5e24" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" exitCode=0
Feb 24 09:58:46 crc kubenswrapper[4822]: I0224 09:58:46.671343 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd752" event={"ID":"306aba52-0b6e-4d3f-b05f-757daebc5e24","Type":"ContainerDied","Data":"0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"}
Feb 24 09:58:46 crc kubenswrapper[4822]: I0224 09:58:46.671431 4822 scope.go:117] "RemoveContainer" containerID="e2d0641ae6d6ef869445847b5fa7176d9593dd9b4a97f7de961c91199df6dc6b"
Feb 24 09:58:46 crc kubenswrapper[4822]: I0224 09:58:46.672346 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 09:58:46 crc kubenswrapper[4822]: E0224 09:58:46.672863 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 09:58:47 crc kubenswrapper[4822]: I0224 09:58:47.337731 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 09:58:47 crc kubenswrapper[4822]: E0224 09:58:47.338252 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 09:58:48 crc kubenswrapper[4822]: I0224 09:58:48.792551 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58316: no
serving certificate available for the kubelet" Feb 24 09:58:48 crc kubenswrapper[4822]: I0224 09:58:48.901264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:58318: no serving certificate available for the kubelet" Feb 24 09:58:51 crc kubenswrapper[4822]: I0224 09:58:51.847445 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52134: no serving certificate available for the kubelet" Feb 24 09:58:51 crc kubenswrapper[4822]: I0224 09:58:51.956948 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52142: no serving certificate available for the kubelet" Feb 24 09:58:54 crc kubenswrapper[4822]: I0224 09:58:54.907089 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52146: no serving certificate available for the kubelet" Feb 24 09:58:55 crc kubenswrapper[4822]: I0224 09:58:55.008180 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52154: no serving certificate available for the kubelet" Feb 24 09:58:57 crc kubenswrapper[4822]: I0224 09:58:57.337641 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 09:58:57 crc kubenswrapper[4822]: E0224 09:58:57.338376 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:58:57 crc kubenswrapper[4822]: I0224 09:58:57.948323 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52158: no serving certificate available for the kubelet" Feb 24 09:58:58 crc kubenswrapper[4822]: I0224 09:58:58.108602 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52162: no serving certificate available for the kubelet" Feb 24 09:58:59 crc 
kubenswrapper[4822]: I0224 09:58:59.337789 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 09:58:59 crc kubenswrapper[4822]: E0224 09:58:59.338272 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 09:58:59 crc kubenswrapper[4822]: I0224 09:58:59.847962 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52176: no serving certificate available for the kubelet" Feb 24 09:58:59 crc kubenswrapper[4822]: I0224 09:58:59.863264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52182: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.048283 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52184: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.062365 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52192: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.441299 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52206: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.453550 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52216: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.454753 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52232: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.467454 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52236: no serving certificate available for the kubelet" Feb 24 
09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.471049 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52248: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.473113 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52254: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.486002 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52258: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.488019 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52270: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.492492 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52280: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.502175 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52290: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.506241 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52304: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.516325 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52308: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.622621 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52316: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.631626 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52332: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.665086 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52344: no serving certificate available for the kubelet" Feb 24 09:59:00 crc 
kubenswrapper[4822]: I0224 09:59:00.675324 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52350: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.699581 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52356: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.710106 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52372: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.710453 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52376: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.711099 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52380: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.720623 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52392: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.720901 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52408: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.749956 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52416: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.760274 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52418: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.860181 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52428: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.865502 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52444: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 
09:59:00.873663 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52456: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.878412 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52466: no serving certificate available for the kubelet" Feb 24 09:59:00 crc kubenswrapper[4822]: I0224 09:59:00.996379 4822 ???:1] "http: TLS handshake error from 192.168.126.11:52480: no serving certificate available for the kubelet" Feb 24 09:59:01 crc kubenswrapper[4822]: I0224 09:59:01.157490 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47958: no serving certificate available for the kubelet" Feb 24 09:59:02 crc kubenswrapper[4822]: I0224 09:59:02.338148 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 09:59:02 crc kubenswrapper[4822]: E0224 09:59:02.338765 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 09:59:04 crc kubenswrapper[4822]: I0224 09:59:04.055086 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47966: no serving certificate available for the kubelet" Feb 24 09:59:04 crc kubenswrapper[4822]: I0224 09:59:04.218511 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47982: no serving certificate available for the kubelet" Feb 24 09:59:07 crc kubenswrapper[4822]: I0224 09:59:07.118571 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47984: no serving certificate available for the kubelet" Feb 24 09:59:07 crc kubenswrapper[4822]: I0224 09:59:07.276646 4822 ???:1] "http: TLS handshake error from 192.168.126.11:47996: no serving certificate available for the kubelet" Feb 24 09:59:08 crc 
kubenswrapper[4822]: I0224 09:59:08.338087 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 09:59:08 crc kubenswrapper[4822]: E0224 09:59:08.338388 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:59:10 crc kubenswrapper[4822]: I0224 09:59:10.162679 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48010: no serving certificate available for the kubelet" Feb 24 09:59:10 crc kubenswrapper[4822]: I0224 09:59:10.338483 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 09:59:10 crc kubenswrapper[4822]: E0224 09:59:10.338832 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 09:59:10 crc kubenswrapper[4822]: I0224 09:59:10.339328 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48024: no serving certificate available for the kubelet" Feb 24 09:59:13 crc kubenswrapper[4822]: I0224 09:59:13.219036 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33408: no serving certificate available for the kubelet" Feb 24 09:59:13 crc kubenswrapper[4822]: I0224 09:59:13.337708 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 09:59:13 crc kubenswrapper[4822]: E0224 
09:59:13.338198 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 09:59:13 crc kubenswrapper[4822]: I0224 09:59:13.390083 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33416: no serving certificate available for the kubelet" Feb 24 09:59:16 crc kubenswrapper[4822]: I0224 09:59:16.265652 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33428: no serving certificate available for the kubelet" Feb 24 09:59:16 crc kubenswrapper[4822]: I0224 09:59:16.449342 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33438: no serving certificate available for the kubelet" Feb 24 09:59:19 crc kubenswrapper[4822]: I0224 09:59:19.298188 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33454: no serving certificate available for the kubelet" Feb 24 09:59:19 crc kubenswrapper[4822]: I0224 09:59:19.500997 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33456: no serving certificate available for the kubelet" Feb 24 09:59:20 crc kubenswrapper[4822]: I0224 09:59:20.340526 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 09:59:20 crc kubenswrapper[4822]: E0224 09:59:20.340970 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:59:22 crc kubenswrapper[4822]: I0224 09:59:22.338186 4822 
scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 09:59:22 crc kubenswrapper[4822]: E0224 09:59:22.338720 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 09:59:22 crc kubenswrapper[4822]: I0224 09:59:22.376193 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46670: no serving certificate available for the kubelet" Feb 24 09:59:22 crc kubenswrapper[4822]: I0224 09:59:22.588380 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46682: no serving certificate available for the kubelet" Feb 24 09:59:25 crc kubenswrapper[4822]: I0224 09:59:25.425621 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46692: no serving certificate available for the kubelet" Feb 24 09:59:25 crc kubenswrapper[4822]: I0224 09:59:25.637347 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46706: no serving certificate available for the kubelet" Feb 24 09:59:27 crc kubenswrapper[4822]: I0224 09:59:27.338202 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 09:59:27 crc kubenswrapper[4822]: E0224 09:59:27.338987 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 09:59:28 crc kubenswrapper[4822]: I0224 09:59:28.475536 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46718: no serving certificate available for the kubelet" Feb 24 
09:59:28 crc kubenswrapper[4822]: I0224 09:59:28.707203 4822 ???:1] "http: TLS handshake error from 192.168.126.11:46730: no serving certificate available for the kubelet" Feb 24 09:59:31 crc kubenswrapper[4822]: I0224 09:59:31.338852 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 09:59:31 crc kubenswrapper[4822]: E0224 09:59:31.341171 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:59:31 crc kubenswrapper[4822]: I0224 09:59:31.531617 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40714: no serving certificate available for the kubelet" Feb 24 09:59:31 crc kubenswrapper[4822]: I0224 09:59:31.769607 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40728: no serving certificate available for the kubelet" Feb 24 09:59:34 crc kubenswrapper[4822]: I0224 09:59:34.595032 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40730: no serving certificate available for the kubelet" Feb 24 09:59:34 crc kubenswrapper[4822]: I0224 09:59:34.829038 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40742: no serving certificate available for the kubelet" Feb 24 09:59:37 crc kubenswrapper[4822]: I0224 09:59:37.337665 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 09:59:37 crc kubenswrapper[4822]: E0224 09:59:37.338175 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera 
pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 09:59:37 crc kubenswrapper[4822]: I0224 09:59:37.705585 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40754: no serving certificate available for the kubelet" Feb 24 09:59:37 crc kubenswrapper[4822]: I0224 09:59:37.866701 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40756: no serving certificate available for the kubelet" Feb 24 09:59:40 crc kubenswrapper[4822]: I0224 09:59:40.765374 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40766: no serving certificate available for the kubelet" Feb 24 09:59:40 crc kubenswrapper[4822]: I0224 09:59:40.934264 4822 ???:1] "http: TLS handshake error from 192.168.126.11:40774: no serving certificate available for the kubelet" Feb 24 09:59:41 crc kubenswrapper[4822]: I0224 09:59:41.337781 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 09:59:41 crc kubenswrapper[4822]: E0224 09:59:41.338225 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 09:59:43 crc kubenswrapper[4822]: I0224 09:59:43.826521 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35998: no serving certificate available for the kubelet" Feb 24 09:59:44 crc kubenswrapper[4822]: I0224 09:59:44.030209 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36008: no serving certificate available for the kubelet" Feb 24 09:59:45 crc kubenswrapper[4822]: I0224 09:59:45.339419 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 
24 09:59:45 crc kubenswrapper[4822]: E0224 09:59:45.340205 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 09:59:46 crc kubenswrapper[4822]: I0224 09:59:46.881807 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36010: no serving certificate available for the kubelet" Feb 24 09:59:47 crc kubenswrapper[4822]: I0224 09:59:47.089102 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36020: no serving certificate available for the kubelet" Feb 24 09:59:49 crc kubenswrapper[4822]: I0224 09:59:49.339364 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 09:59:49 crc kubenswrapper[4822]: E0224 09:59:49.340177 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 09:59:49 crc kubenswrapper[4822]: I0224 09:59:49.931011 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36022: no serving certificate available for the kubelet" Feb 24 09:59:50 crc kubenswrapper[4822]: I0224 09:59:50.194698 4822 ???:1] "http: TLS handshake error from 192.168.126.11:36038: no serving certificate available for the kubelet" Feb 24 09:59:50 crc kubenswrapper[4822]: I0224 09:59:50.310368 4822 generic.go:334] "Generic (PLEG): container finished" podID="244fb64f-3d89-480f-b297-abc7a1b5a448" 
containerID="2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c" exitCode=0 Feb 24 09:59:50 crc kubenswrapper[4822]: I0224 09:59:50.310426 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" event={"ID":"244fb64f-3d89-480f-b297-abc7a1b5a448","Type":"ContainerDied","Data":"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"} Feb 24 09:59:50 crc kubenswrapper[4822]: I0224 09:59:50.311057 4822 scope.go:117] "RemoveContainer" containerID="2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c" Feb 24 09:59:52 crc kubenswrapper[4822]: I0224 09:59:52.989651 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57726: no serving certificate available for the kubelet" Feb 24 09:59:53 crc kubenswrapper[4822]: I0224 09:59:53.253160 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57736: no serving certificate available for the kubelet" Feb 24 09:59:54 crc kubenswrapper[4822]: I0224 09:59:54.950862 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57738: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.118855 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57740: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.132041 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57750: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.153981 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57762: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.167168 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57778: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.184077 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57786: no serving certificate 
available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.197078 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57796: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.215558 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57800: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.231044 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57814: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.337867 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 09:59:55 crc kubenswrapper[4822]: E0224 09:59:55.338623 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.414993 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57822: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.429115 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57838: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.451443 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57852: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.465578 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57858: no serving certificate available for the kubelet" Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.485585 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57870: no 
serving certificate available for the kubelet"
Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.499341 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57884: no serving certificate available for the kubelet"
Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.515825 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57892: no serving certificate available for the kubelet"
Feb 24 09:59:55 crc kubenswrapper[4822]: I0224 09:59:55.528167 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57900: no serving certificate available for the kubelet"
Feb 24 09:59:56 crc kubenswrapper[4822]: I0224 09:59:56.048851 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57906: no serving certificate available for the kubelet"
Feb 24 09:59:56 crc kubenswrapper[4822]: I0224 09:59:56.307619 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57912: no serving certificate available for the kubelet"
Feb 24 09:59:56 crc kubenswrapper[4822]: I0224 09:59:56.338532 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 09:59:56 crc kubenswrapper[4822]: E0224 09:59:56.339258 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 09:59:59 crc kubenswrapper[4822]: I0224 09:59:59.106410 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57922: no serving certificate available for the kubelet"
Feb 24 09:59:59 crc kubenswrapper[4822]: I0224 09:59:59.367534 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57928: no serving certificate available for the kubelet"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.169173 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"]
Feb 24 10:00:00 crc kubenswrapper[4822]: E0224 10:00:00.169889 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="extract-utilities"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.169969 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="extract-utilities"
Feb 24 10:00:00 crc kubenswrapper[4822]: E0224 10:00:00.170034 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="extract-content"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.170056 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="extract-content"
Feb 24 10:00:00 crc kubenswrapper[4822]: E0224 10:00:00.170092 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="registry-server"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.170110 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="registry-server"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.170541 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3523eb-cfe0-4994-9342-9b59c22dc6b4" containerName="registry-server"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.171708 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.174393 4822 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.174722 4822 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.185082 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"]
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.267722 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.267788 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zpg\" (UniqueName: \"kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.267834 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.369216 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.369587 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zpg\" (UniqueName: \"kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.369621 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.370581 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.380467 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.385244 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zpg\" (UniqueName: \"kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg\") pod \"collect-profiles-29532120-5hv9x\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.492227 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.928455 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6jz7r/must-gather-bhc9h"]
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.928684 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6jz7r/must-gather-bhc9h" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="copy" containerID="cri-o://17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe" gracePeriod=2
Feb 24 10:00:00 crc kubenswrapper[4822]: I0224 10:00:00.935865 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6jz7r/must-gather-bhc9h"]
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.010436 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"]
Feb 24 10:00:01 crc kubenswrapper[4822]: W0224 10:00:01.049762 4822 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3efd477_d6a0_41b6_8607_4c68c2b8dda0.slice/crio-38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66 WatchSource:0}: Error finding container 38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66: Status 404 returned error can't find the container with id 38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.268507 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6jz7r_must-gather-bhc9h_244fb64f-3d89-480f-b297-abc7a1b5a448/copy/0.log"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.269279 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6jz7r/must-gather-bhc9h"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.386059 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output\") pod \"244fb64f-3d89-480f-b297-abc7a1b5a448\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") "
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.386198 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqcwv\" (UniqueName: \"kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv\") pod \"244fb64f-3d89-480f-b297-abc7a1b5a448\" (UID: \"244fb64f-3d89-480f-b297-abc7a1b5a448\") "
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.392332 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv" (OuterVolumeSpecName: "kube-api-access-mqcwv") pod "244fb64f-3d89-480f-b297-abc7a1b5a448" (UID: "244fb64f-3d89-480f-b297-abc7a1b5a448"). InnerVolumeSpecName "kube-api-access-mqcwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.417345 4822 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6jz7r_must-gather-bhc9h_244fb64f-3d89-480f-b297-abc7a1b5a448/copy/0.log"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.417746 4822 generic.go:334] "Generic (PLEG): container finished" podID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerID="17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe" exitCode=143
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.417827 4822 scope.go:117] "RemoveContainer" containerID="17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.417838 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6jz7r/must-gather-bhc9h"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.419260 4822 generic.go:334] "Generic (PLEG): container finished" podID="b3efd477-d6a0-41b6-8607-4c68c2b8dda0" containerID="34c455ea75797b7004f626abd9ede0595a43714ffebd36cc12df36f28388d4ab" exitCode=0
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.419304 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x" event={"ID":"b3efd477-d6a0-41b6-8607-4c68c2b8dda0","Type":"ContainerDied","Data":"34c455ea75797b7004f626abd9ede0595a43714ffebd36cc12df36f28388d4ab"}
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.419334 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x" event={"ID":"b3efd477-d6a0-41b6-8607-4c68c2b8dda0","Type":"ContainerStarted","Data":"38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66"}
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.459051 4822 scope.go:117] "RemoveContainer" containerID="2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.488429 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqcwv\" (UniqueName: \"kubernetes.io/projected/244fb64f-3d89-480f-b297-abc7a1b5a448-kube-api-access-mqcwv\") on node \"crc\" DevicePath \"\""
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.492456 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "244fb64f-3d89-480f-b297-abc7a1b5a448" (UID: "244fb64f-3d89-480f-b297-abc7a1b5a448"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.525373 4822 scope.go:117] "RemoveContainer" containerID="17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe"
Feb 24 10:00:01 crc kubenswrapper[4822]: E0224 10:00:01.526127 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe\": container with ID starting with 17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe not found: ID does not exist" containerID="17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.526163 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe"} err="failed to get container status \"17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe\": rpc error: code = NotFound desc = could not find container \"17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe\": container with ID starting with 17e0bcac5a2b98ab9d066c86f71a11e9d2952346bbda60a2db1dd0e647fddefe not found: ID does not exist"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.526189 4822 scope.go:117] "RemoveContainer" containerID="2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"
Feb 24 10:00:01 crc kubenswrapper[4822]: E0224 10:00:01.526612 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c\": container with ID starting with 2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c not found: ID does not exist" containerID="2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.526665 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c"} err="failed to get container status \"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c\": rpc error: code = NotFound desc = could not find container \"2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c\": container with ID starting with 2872a84efb2502c4f702fbc9f79ff2cfae13d6d10a37633189c91f5ec5355d3c not found: ID does not exist"
Feb 24 10:00:01 crc kubenswrapper[4822]: I0224 10:00:01.592071 4822 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/244fb64f-3d89-480f-b297-abc7a1b5a448-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.145944 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33414: no serving certificate available for the kubelet"
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.337581 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 10:00:02 crc kubenswrapper[4822]: E0224 10:00:02.337875 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.344698 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" path="/var/lib/kubelet/pods/244fb64f-3d89-480f-b297-abc7a1b5a448/volumes"
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.404460 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33422: no serving certificate available for the kubelet"
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.682693 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.811742 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume\") pod \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") "
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.812221 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5zpg\" (UniqueName: \"kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg\") pod \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") "
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.812289 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume\") pod \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\" (UID: \"b3efd477-d6a0-41b6-8607-4c68c2b8dda0\") "
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.812974 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3efd477-d6a0-41b6-8607-4c68c2b8dda0" (UID: "b3efd477-d6a0-41b6-8607-4c68c2b8dda0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.820091 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3efd477-d6a0-41b6-8607-4c68c2b8dda0" (UID: "b3efd477-d6a0-41b6-8607-4c68c2b8dda0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.831781 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg" (OuterVolumeSpecName: "kube-api-access-c5zpg") pod "b3efd477-d6a0-41b6-8607-4c68c2b8dda0" (UID: "b3efd477-d6a0-41b6-8607-4c68c2b8dda0"). InnerVolumeSpecName "kube-api-access-c5zpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.914251 4822 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.914283 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5zpg\" (UniqueName: \"kubernetes.io/projected/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-kube-api-access-c5zpg\") on node \"crc\" DevicePath \"\""
Feb 24 10:00:02 crc kubenswrapper[4822]: I0224 10:00:02.914292 4822 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3efd477-d6a0-41b6-8607-4c68c2b8dda0-config-volume\") on node \"crc\" DevicePath \"\""
Feb 24 10:00:03 crc kubenswrapper[4822]: I0224 10:00:03.439490 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x" event={"ID":"b3efd477-d6a0-41b6-8607-4c68c2b8dda0","Type":"ContainerDied","Data":"38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66"}
Feb 24 10:00:03 crc kubenswrapper[4822]: I0224 10:00:03.439550 4822 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38bb8a992edeb6c6fac93ff431d06d17ea20941d800f2730a742cc7fa8fece66"
Feb 24 10:00:03 crc kubenswrapper[4822]: I0224 10:00:03.439577 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-5hv9x"
Feb 24 10:00:03 crc kubenswrapper[4822]: I0224 10:00:03.765462 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk"]
Feb 24 10:00:03 crc kubenswrapper[4822]: I0224 10:00:03.775158 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-8zpfk"]
Feb 24 10:00:04 crc kubenswrapper[4822]: I0224 10:00:04.351144 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda61b9a-bb37-43aa-87a1-00f9182d98ec" path="/var/lib/kubelet/pods/bda61b9a-bb37-43aa-87a1-00f9182d98ec/volumes"
Feb 24 10:00:05 crc kubenswrapper[4822]: I0224 10:00:05.211090 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33430: no serving certificate available for the kubelet"
Feb 24 10:00:05 crc kubenswrapper[4822]: I0224 10:00:05.458245 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33436: no serving certificate available for the kubelet"
Feb 24 10:00:06 crc kubenswrapper[4822]: I0224 10:00:06.311146 4822 scope.go:117] "RemoveContainer" containerID="e16dc8a7d6918d7b5591aff756005bdc24d10677beee7d165800c3fdcc948d5a"
Feb 24 10:00:08 crc kubenswrapper[4822]: I0224 10:00:08.263831 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33452: no serving certificate available for the kubelet"
Feb 24 10:00:08 crc kubenswrapper[4822]: I0224 10:00:08.346468 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 10:00:08 crc kubenswrapper[4822]: E0224 10:00:08.347033 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 10:00:08 crc kubenswrapper[4822]: I0224 10:00:08.515539 4822 ???:1] "http: TLS handshake error from 192.168.126.11:33458: no serving certificate available for the kubelet"
Feb 24 10:00:09 crc kubenswrapper[4822]: I0224 10:00:09.337167 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 10:00:09 crc kubenswrapper[4822]: E0224 10:00:09.338323 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 10:00:11 crc kubenswrapper[4822]: I0224 10:00:11.311349 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48114: no serving certificate available for the kubelet"
Feb 24 10:00:11 crc kubenswrapper[4822]: I0224 10:00:11.566021 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48128: no serving certificate available for the kubelet"
Feb 24 10:00:13 crc kubenswrapper[4822]: I0224 10:00:13.337664 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 10:00:13 crc kubenswrapper[4822]: E0224 10:00:13.338372 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 10:00:14 crc kubenswrapper[4822]: I0224 10:00:14.384426 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48136: no serving certificate available for the kubelet"
Feb 24 10:00:14 crc kubenswrapper[4822]: I0224 10:00:14.610464 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48150: no serving certificate available for the kubelet"
Feb 24 10:00:17 crc kubenswrapper[4822]: I0224 10:00:17.502783 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48154: no serving certificate available for the kubelet"
Feb 24 10:00:17 crc kubenswrapper[4822]: I0224 10:00:17.662211 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48158: no serving certificate available for the kubelet"
Feb 24 10:00:20 crc kubenswrapper[4822]: I0224 10:00:20.553394 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48164: no serving certificate available for the kubelet"
Feb 24 10:00:20 crc kubenswrapper[4822]: I0224 10:00:20.711008 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48180: no serving certificate available for the kubelet"
Feb 24 10:00:21 crc kubenswrapper[4822]: I0224 10:00:21.338617 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 10:00:21 crc kubenswrapper[4822]: I0224 10:00:21.338803 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 10:00:21 crc kubenswrapper[4822]: E0224 10:00:21.339063 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 10:00:21 crc kubenswrapper[4822]: E0224 10:00:21.339422 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 10:00:23 crc kubenswrapper[4822]: I0224 10:00:23.607250 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51522: no serving certificate available for the kubelet"
Feb 24 10:00:23 crc kubenswrapper[4822]: I0224 10:00:23.774231 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51538: no serving certificate available for the kubelet"
Feb 24 10:00:26 crc kubenswrapper[4822]: I0224 10:00:26.662476 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51550: no serving certificate available for the kubelet"
Feb 24 10:00:26 crc kubenswrapper[4822]: I0224 10:00:26.827094 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51560: no serving certificate available for the kubelet"
Feb 24 10:00:27 crc kubenswrapper[4822]: I0224 10:00:27.337986 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 10:00:27 crc kubenswrapper[4822]: E0224 10:00:27.338452 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 10:00:29 crc kubenswrapper[4822]: I0224 10:00:29.709048 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51564: no serving certificate available for the kubelet"
Feb 24 10:00:29 crc kubenswrapper[4822]: I0224 10:00:29.887306 4822 ???:1] "http: TLS handshake error from 192.168.126.11:51572: no serving certificate available for the kubelet"
Feb 24 10:00:32 crc kubenswrapper[4822]: I0224 10:00:32.799691 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60884: no serving certificate available for the kubelet"
Feb 24 10:00:32 crc kubenswrapper[4822]: I0224 10:00:32.935351 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60886: no serving certificate available for the kubelet"
Feb 24 10:00:35 crc kubenswrapper[4822]: I0224 10:00:35.337641 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 10:00:35 crc kubenswrapper[4822]: I0224 10:00:35.338322 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 10:00:35 crc kubenswrapper[4822]: E0224 10:00:35.338487 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 10:00:35 crc kubenswrapper[4822]: E0224 10:00:35.338705 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 10:00:35 crc kubenswrapper[4822]: I0224 10:00:35.851304 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60894: no serving certificate available for the kubelet"
Feb 24 10:00:35 crc kubenswrapper[4822]: I0224 10:00:35.973400 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60898: no serving certificate available for the kubelet"
Feb 24 10:00:38 crc kubenswrapper[4822]: I0224 10:00:38.343937 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 10:00:38 crc kubenswrapper[4822]: E0224 10:00:38.344363 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 10:00:38 crc kubenswrapper[4822]: I0224 10:00:38.916082 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60900: no serving certificate available for the kubelet"
Feb 24 10:00:39 crc kubenswrapper[4822]: I0224 10:00:39.027180 4822 ???:1] "http: TLS handshake error from 192.168.126.11:60906: no serving certificate available for the kubelet"
Feb 24 10:00:41 crc kubenswrapper[4822]: I0224 10:00:41.960395 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53058: no serving certificate available for the kubelet"
Feb 24 10:00:42 crc kubenswrapper[4822]: I0224 10:00:42.090419 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53070: no serving certificate available for the kubelet"
Feb 24 10:00:45 crc kubenswrapper[4822]: I0224 10:00:45.016595 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53076: no serving certificate available for the kubelet"
Feb 24 10:00:45 crc kubenswrapper[4822]: I0224 10:00:45.154861 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53088: no serving certificate available for the kubelet"
Feb 24 10:00:46 crc kubenswrapper[4822]: I0224 10:00:46.337813 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01"
Feb 24 10:00:46 crc kubenswrapper[4822]: I0224 10:00:46.338317 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc"
Feb 24 10:00:46 crc kubenswrapper[4822]: E0224 10:00:46.338581 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24"
Feb 24 10:00:46 crc kubenswrapper[4822]: E0224 10:00:46.338590 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd"
Feb 24 10:00:48 crc kubenswrapper[4822]: I0224 10:00:48.078477 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53090: no serving certificate available for the kubelet"
Feb 24 10:00:48 crc kubenswrapper[4822]: I0224 10:00:48.216225 4822 ???:1] "http: TLS handshake error from 192.168.126.11:53096: no serving certificate available for the kubelet"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.337172 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412"
Feb 24 10:00:50 crc kubenswrapper[4822]: E0224 10:00:50.337871 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.924517 4822 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"]
Feb 24 10:00:50 crc kubenswrapper[4822]: E0224 10:00:50.925014 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="copy"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925044 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="copy"
Feb 24 10:00:50 crc kubenswrapper[4822]: E0224 10:00:50.925064 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3efd477-d6a0-41b6-8607-4c68c2b8dda0" containerName="collect-profiles"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925077 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efd477-d6a0-41b6-8607-4c68c2b8dda0" containerName="collect-profiles"
Feb 24 10:00:50 crc kubenswrapper[4822]: E0224 10:00:50.925109 4822 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="gather"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925122 4822 state_mem.go:107] "Deleted CPUSet assignment" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="gather"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925421 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="copy"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925451 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="244fb64f-3d89-480f-b297-abc7a1b5a448" containerName="gather"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.925487 4822 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3efd477-d6a0-41b6-8607-4c68c2b8dda0" containerName="collect-profiles"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.928634 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrhv"
Feb 24 10:00:50 crc kubenswrapper[4822]: I0224 10:00:50.932456 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"]
Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.036492 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz4g\" (UniqueName: \"kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv"
Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.037198 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv"
Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.037372 4822 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv"
Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.120619 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48086: no serving certificate available for the kubelet"
Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.139398 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzz4g\" (UniqueName: \"kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g\") pod \"redhat-operators-9vrhv\" (UID:
\"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.139437 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.139458 4822 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.139881 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.140022 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.169158 4822 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzz4g\" (UniqueName: \"kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g\") pod \"redhat-operators-9vrhv\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " 
pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.252240 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48088: no serving certificate available for the kubelet" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.293355 4822 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:00:51 crc kubenswrapper[4822]: I0224 10:00:51.745081 4822 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"] Feb 24 10:00:52 crc kubenswrapper[4822]: I0224 10:00:52.066160 4822 generic.go:334] "Generic (PLEG): container finished" podID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" containerID="09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc" exitCode=0 Feb 24 10:00:52 crc kubenswrapper[4822]: I0224 10:00:52.066204 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerDied","Data":"09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc"} Feb 24 10:00:52 crc kubenswrapper[4822]: I0224 10:00:52.066472 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerStarted","Data":"ecd34f103f0f77f4b6a30e22ebd911e1eb29cb5c244363c55c45f16e57e5ea6f"} Feb 24 10:00:52 crc kubenswrapper[4822]: I0224 10:00:52.068569 4822 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:00:54 crc kubenswrapper[4822]: I0224 10:00:54.089271 4822 generic.go:334] "Generic (PLEG): container finished" podID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" containerID="fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27" exitCode=0 Feb 24 10:00:54 crc kubenswrapper[4822]: I0224 10:00:54.089370 4822 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerDied","Data":"fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27"} Feb 24 10:00:54 crc kubenswrapper[4822]: I0224 10:00:54.169166 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48098: no serving certificate available for the kubelet" Feb 24 10:00:54 crc kubenswrapper[4822]: I0224 10:00:54.310939 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48106: no serving certificate available for the kubelet" Feb 24 10:00:55 crc kubenswrapper[4822]: I0224 10:00:55.104220 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerStarted","Data":"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8"} Feb 24 10:00:55 crc kubenswrapper[4822]: I0224 10:00:55.126013 4822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9vrhv" podStartSLOduration=2.696368185 podStartE2EDuration="5.125986066s" podCreationTimestamp="2026-02-24 10:00:50 +0000 UTC" firstStartedPulling="2026-02-24 10:00:52.068379954 +0000 UTC m=+3174.456142492" lastFinishedPulling="2026-02-24 10:00:54.497997825 +0000 UTC m=+3176.885760373" observedRunningTime="2026-02-24 10:00:55.122238776 +0000 UTC m=+3177.510001354" watchObservedRunningTime="2026-02-24 10:00:55.125986066 +0000 UTC m=+3177.513748644" Feb 24 10:00:57 crc kubenswrapper[4822]: I0224 10:00:57.214368 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48116: no serving certificate available for the kubelet" Feb 24 10:00:57 crc kubenswrapper[4822]: I0224 10:00:57.337743 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 10:00:57 crc kubenswrapper[4822]: E0224 10:00:57.338287 4822 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 10:00:57 crc kubenswrapper[4822]: I0224 10:00:57.362113 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48122: no serving certificate available for the kubelet" Feb 24 10:00:59 crc kubenswrapper[4822]: I0224 10:00:59.338522 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 10:00:59 crc kubenswrapper[4822]: E0224 10:00:59.339371 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 10:01:00 crc kubenswrapper[4822]: I0224 10:01:00.255018 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48132: no serving certificate available for the kubelet" Feb 24 10:01:00 crc kubenswrapper[4822]: I0224 10:01:00.406628 4822 ???:1] "http: TLS handshake error from 192.168.126.11:48140: no serving certificate available for the kubelet" Feb 24 10:01:01 crc kubenswrapper[4822]: I0224 10:01:01.294250 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:01 crc kubenswrapper[4822]: I0224 10:01:01.294734 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:02 crc kubenswrapper[4822]: I0224 10:01:02.337627 4822 scope.go:117] "RemoveContainer" 
containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 10:01:02 crc kubenswrapper[4822]: E0224 10:01:02.338385 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 10:01:02 crc kubenswrapper[4822]: I0224 10:01:02.374665 4822 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vrhv" podUID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" containerName="registry-server" probeResult="failure" output=< Feb 24 10:01:02 crc kubenswrapper[4822]: timeout: failed to connect service ":50051" within 1s Feb 24 10:01:02 crc kubenswrapper[4822]: > Feb 24 10:01:03 crc kubenswrapper[4822]: I0224 10:01:03.321320 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57366: no serving certificate available for the kubelet" Feb 24 10:01:03 crc kubenswrapper[4822]: I0224 10:01:03.465294 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57376: no serving certificate available for the kubelet" Feb 24 10:01:06 crc kubenswrapper[4822]: I0224 10:01:06.392559 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57378: no serving certificate available for the kubelet" Feb 24 10:01:06 crc kubenswrapper[4822]: I0224 10:01:06.525853 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57394: no serving certificate available for the kubelet" Feb 24 10:01:09 crc kubenswrapper[4822]: I0224 10:01:09.445181 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57406: no serving certificate available for the kubelet" Feb 24 10:01:09 crc kubenswrapper[4822]: I0224 10:01:09.583154 4822 ???:1] "http: TLS handshake error from 192.168.126.11:57420: no serving certificate available for the kubelet" Feb 24 10:01:11 crc 
kubenswrapper[4822]: I0224 10:01:11.376846 4822 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:11 crc kubenswrapper[4822]: I0224 10:01:11.458366 4822 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:12 crc kubenswrapper[4822]: I0224 10:01:12.323710 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"] Feb 24 10:01:12 crc kubenswrapper[4822]: I0224 10:01:12.337703 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 10:01:12 crc kubenswrapper[4822]: E0224 10:01:12.337954 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 10:01:12 crc kubenswrapper[4822]: I0224 10:01:12.338232 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 10:01:12 crc kubenswrapper[4822]: E0224 10:01:12.338556 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 10:01:12 crc kubenswrapper[4822]: I0224 10:01:12.499227 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54760: no serving certificate available for the kubelet" Feb 24 10:01:12 crc 
kubenswrapper[4822]: I0224 10:01:12.629706 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54768: no serving certificate available for the kubelet" Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.315094 4822 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vrhv" podUID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" containerName="registry-server" containerID="cri-o://2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8" gracePeriod=2 Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.853493 4822 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.881996 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities\") pod \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.882187 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzz4g\" (UniqueName: \"kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g\") pod \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.882234 4822 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content\") pod \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\" (UID: \"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf\") " Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.883835 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities" (OuterVolumeSpecName: "utilities") pod "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" (UID: "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.893080 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g" (OuterVolumeSpecName: "kube-api-access-pzz4g") pod "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" (UID: "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf"). InnerVolumeSpecName "kube-api-access-pzz4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.984233 4822 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzz4g\" (UniqueName: \"kubernetes.io/projected/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-kube-api-access-pzz4g\") on node \"crc\" DevicePath \"\"" Feb 24 10:01:13 crc kubenswrapper[4822]: I0224 10:01:13.984296 4822 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.013309 4822 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" (UID: "6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.086538 4822 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.329822 4822 generic.go:334] "Generic (PLEG): container finished" podID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" containerID="2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8" exitCode=0 Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.329884 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerDied","Data":"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8"} Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.329939 4822 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrhv" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.329947 4822 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrhv" event={"ID":"6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf","Type":"ContainerDied","Data":"ecd34f103f0f77f4b6a30e22ebd911e1eb29cb5c244363c55c45f16e57e5ea6f"} Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.329980 4822 scope.go:117] "RemoveContainer" containerID="2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.362948 4822 scope.go:117] "RemoveContainer" containerID="fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.394681 4822 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"] Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.403840 4822 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vrhv"] Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.415890 4822 scope.go:117] "RemoveContainer" containerID="09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.460209 4822 scope.go:117] "RemoveContainer" containerID="2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8" Feb 24 10:01:14 crc kubenswrapper[4822]: E0224 10:01:14.460719 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8\": container with ID starting with 2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8 not found: ID does not exist" containerID="2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.460761 4822 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8"} err="failed to get container status \"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8\": rpc error: code = NotFound desc = could not find container \"2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8\": container with ID starting with 2f32d01166400e99034c4a360c4ce17ddbaed08c0424b3e5066c56409d6a27a8 not found: ID does not exist" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.460781 4822 scope.go:117] "RemoveContainer" containerID="fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27" Feb 24 10:01:14 crc kubenswrapper[4822]: E0224 10:01:14.461391 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27\": container with ID starting with fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27 not found: ID does not exist" containerID="fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.461461 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27"} err="failed to get container status \"fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27\": rpc error: code = NotFound desc = could not find container \"fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27\": container with ID starting with fad8b7acbc2b1716f54a5091a29ca75807aaffa67057882e95d1c1c460baea27 not found: ID does not exist" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.461527 4822 scope.go:117] "RemoveContainer" containerID="09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc" Feb 24 10:01:14 crc kubenswrapper[4822]: E0224 
10:01:14.462044 4822 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc\": container with ID starting with 09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc not found: ID does not exist" containerID="09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc" Feb 24 10:01:14 crc kubenswrapper[4822]: I0224 10:01:14.462074 4822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc"} err="failed to get container status \"09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc\": rpc error: code = NotFound desc = could not find container \"09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc\": container with ID starting with 09521015512ca2b68188c9c8f2d43696d143e559dd1cb42db9e5f377925b17bc not found: ID does not exist" Feb 24 10:01:14 crc kubenswrapper[4822]: E0224 10:01:14.501708 4822 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe917af_cefc_4c91_9f6f_24bd5f2f4bdf.slice/crio-ecd34f103f0f77f4b6a30e22ebd911e1eb29cb5c244363c55c45f16e57e5ea6f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe917af_cefc_4c91_9f6f_24bd5f2f4bdf.slice\": RecentStats: unable to find data in memory cache]" Feb 24 10:01:15 crc kubenswrapper[4822]: I0224 10:01:15.559458 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54780: no serving certificate available for the kubelet" Feb 24 10:01:15 crc kubenswrapper[4822]: I0224 10:01:15.692846 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54792: no serving certificate available for the kubelet" Feb 24 10:01:16 crc kubenswrapper[4822]: I0224 
10:01:16.337153 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 10:01:16 crc kubenswrapper[4822]: E0224 10:01:16.338045 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 10:01:16 crc kubenswrapper[4822]: I0224 10:01:16.355231 4822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf" path="/var/lib/kubelet/pods/6fe917af-cefc-4c91-9f6f-24bd5f2f4bdf/volumes" Feb 24 10:01:18 crc kubenswrapper[4822]: I0224 10:01:18.621861 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54798: no serving certificate available for the kubelet" Feb 24 10:01:18 crc kubenswrapper[4822]: I0224 10:01:18.755322 4822 ???:1] "http: TLS handshake error from 192.168.126.11:54802: no serving certificate available for the kubelet" Feb 24 10:01:21 crc kubenswrapper[4822]: I0224 10:01:21.679118 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50382: no serving certificate available for the kubelet" Feb 24 10:01:21 crc kubenswrapper[4822]: I0224 10:01:21.815763 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50388: no serving certificate available for the kubelet" Feb 24 10:01:24 crc kubenswrapper[4822]: I0224 10:01:24.739801 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50396: no serving certificate available for the kubelet" Feb 24 10:01:24 crc kubenswrapper[4822]: I0224 10:01:24.864940 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50398: no serving certificate available for the kubelet" Feb 24 10:01:25 crc kubenswrapper[4822]: I0224 10:01:25.338050 4822 scope.go:117] "RemoveContainer" 
containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 10:01:25 crc kubenswrapper[4822]: E0224 10:01:25.338491 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 10:01:25 crc kubenswrapper[4822]: I0224 10:01:25.416670 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50400: no serving certificate available for the kubelet" Feb 24 10:01:27 crc kubenswrapper[4822]: I0224 10:01:27.338556 4822 scope.go:117] "RemoveContainer" containerID="7536a9f101dbd2bac88d83679be37627b03c6e965a1c67c7376c010e4551efbc" Feb 24 10:01:27 crc kubenswrapper[4822]: E0224 10:01:27.339313 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(3ff049ae-9abb-4477-9f51-eee7228cedfd)\"" pod="openstack/openstack-cell1-galera-0" podUID="3ff049ae-9abb-4477-9f51-eee7228cedfd" Feb 24 10:01:27 crc kubenswrapper[4822]: I0224 10:01:27.801995 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50404: no serving certificate available for the kubelet" Feb 24 10:01:27 crc kubenswrapper[4822]: I0224 10:01:27.926705 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50414: no serving certificate available for the kubelet" Feb 24 10:01:30 crc kubenswrapper[4822]: I0224 10:01:30.840381 4822 ???:1] "http: TLS handshake error from 192.168.126.11:50422: no serving certificate available for the kubelet" Feb 24 10:01:30 crc kubenswrapper[4822]: I0224 10:01:30.979085 4822 ???:1] "http: TLS handshake error from 
192.168.126.11:50430: no serving certificate available for the kubelet" Feb 24 10:01:31 crc kubenswrapper[4822]: I0224 10:01:31.337278 4822 scope.go:117] "RemoveContainer" containerID="9096168b79cb1ca2340175113d8fd48922513f5c3fc8b043f568f158dcad3412" Feb 24 10:01:31 crc kubenswrapper[4822]: E0224 10:01:31.337582 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(92cdd045-d60c-433d-b2e3-32f93299ee8e)\"" pod="openstack/openstack-galera-0" podUID="92cdd045-d60c-433d-b2e3-32f93299ee8e" Feb 24 10:01:33 crc kubenswrapper[4822]: I0224 10:01:33.916872 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35840: no serving certificate available for the kubelet" Feb 24 10:01:34 crc kubenswrapper[4822]: I0224 10:01:34.018418 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35856: no serving certificate available for the kubelet" Feb 24 10:01:36 crc kubenswrapper[4822]: I0224 10:01:36.337632 4822 scope.go:117] "RemoveContainer" containerID="0cdf73a07b30b9289a24f8a41d923624e54e5740a1f0b1a8a379fbe0eb826e01" Feb 24 10:01:36 crc kubenswrapper[4822]: E0224 10:01:36.338218 4822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd752_openshift-machine-config-operator(306aba52-0b6e-4d3f-b05f-757daebc5e24)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd752" podUID="306aba52-0b6e-4d3f-b05f-757daebc5e24" Feb 24 10:01:36 crc kubenswrapper[4822]: I0224 10:01:36.971994 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35860: no serving certificate available for the kubelet" Feb 24 10:01:37 crc kubenswrapper[4822]: I0224 10:01:37.057050 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35872: no serving 
certificate available for the kubelet" Feb 24 10:01:40 crc kubenswrapper[4822]: I0224 10:01:40.020879 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35876: no serving certificate available for the kubelet" Feb 24 10:01:40 crc kubenswrapper[4822]: I0224 10:01:40.114477 4822 ???:1] "http: TLS handshake error from 192.168.126.11:35886: no serving certificate available for the kubelet"